<goran-mahovlic> @flea86 we used your board in a comparison chart so if you can please let us know if everything is ok with you so we can change it if we got something wrong!! tnx!
<azonenberg> tnt, zkms: so a guy on twitter discovered that the phy needs the clock ac coupled if the voltage is <2.5V
<azonenberg> so i reworked the 1.8V oscillator's terminating resistor to an ac coupling cap
<azonenberg> same behavior so that wasn't it
<tnt> azonenberg: could the 3v3 through the led (when off) somehow go back into the chip DVVDH supply through the protection diodes and mess with things ?
<tnt> oh wait nm, you don't have the led connected to the phy ... doh ...
<daveshah> whitequark: that's really awesome
<daveshah> Thanks!
<swetland> goran-mahovlic: that's a nice looking board
<Flea86> swetland: Indeed.
<paul_99> hi
<paul_99> I'm wanting to use the lattice radix-2 fft_butterfly ip to build a 64 point fft. The ip takes N bits for re/im/twiddle and outputs each of re/im as 2N bits
<paul_99> presumably this is so the arithmetic doesn't overflow, but am I meant to be rescaling this myself, or only using some of the bits
<paul_99> at each stage
<paul_99> the full FFT compiler ip takes N bits for each sample and outputs N bits for each bin
<paul_99> so wondering what they do in the middle between each butterfly stage
<tnt> usually you rescale every couple of stages
<tnt> The xilinx fft docs explain that with the 'rescale scheduling' thing.
<paul_99> tnt: found the pages explaining it, thank you
<goran-mahovlic> @swetland tnx
<whitequark> daveshah: so
<whitequark> i think depth relaxation is done?
<whitequark> it doesn't assert on my test inputs anymore
<whitequark> can you try to break it
<daveshah> sure
<daveshah> wondering why I didn't see an improvement, forgot to call with -relax
<daveshah> derp
<daveshah> hmm, still doesn't seem happy
<daveshah> picorv32 has gone up from ~2000 LUT4 before to 3012 LUT4 (~1500 with abc)
<whitequark> daveshah: no improvement is to be expected
<whitequark> depth relaxation *increases* the number of LUTs substantially
<whitequark> by break i mean make it assert
<daveshah> oh, this is just depth relaxation
<daveshah> well it seems fine in that sense :)
<whitequark> -debug-relax adds some more consistency checks btw
<daveshah> that seems happy on a couple of largish designs too :)
<daveshah> timing looks almost unchanged with picorv32
<whitequark> excellent
<whitequark> and that's exactly what it should be
<daveshah> yep
<whitequark> virtually unchanged with -optarea 0, perhaps slightly degraded with -optarea 1
<whitequark> and higher
<whitequark> okay. now how do i actually implement df-map
<whitequark> i guess i first need to generate cuts
<daveshah> or MFFCs first?
<whitequark> mm
<whitequark> speaking of MFFCs
<whitequark> i think i finally figured out properly why df-map doesn't increase depth
<whitequark> so, each lut is either a part of an mffc, or is a complete mffc
<whitequark> if you imagine the gate IR with duplicated nodes
<whitequark> ergo, df-map will at least pack that LUT back
<whitequark> since, k-feasible mffc in, k-feasible mffc out
<whitequark> this is not quite the same as df-map working directly on luts, though!
<whitequark> so what i'm thinking here is
<whitequark> i construct an mffc rooted in some LUTv
<whitequark> that includes, say, three luts
<whitequark> i now take this subgraph in gate IR and run df-map on it
<whitequark> i think this is how you can get something like fig 8
<daveshah> I think this seems reasonable
<whitequark> i'm not entirely sure though
<whitequark> one good thing is, i can make it an assert, that df-map never increases lut count or depth
<whitequark> and then just run bugpoint
<whitequark> or vloghammer+bugpoint
<daveshah> That seems sensible
<daveshah> I know you've been asking for this for a while. Hopefully it makes some of your crazy demos a bit easier :D
<whitequark> oooh
<mithro> Morning everyone
<daveshah> Evening mithro
<mithro> daveshah: Cool!
<tnt> daveshah: oh, thanks ! Indeed that should reveal any issues I might have :p
<shapr> My brain still has a name collision between Dance Dance Revolution and ...
<azonenberg_work> shapr: you mean people don't dance at Double Data Rate?
<azonenberg_work> I thought you were supposed to move your left foot at the start of the beat and the right foot halfway to the next one
<shapr> :-D
<shapr> I'm still angry that verilog can't parameterize clock dividers by the actual PLL value
<azonenberg_work> what do you mean?
<whitequark> use migen :p
<azonenberg_work> I have a core for xilinx parts that calculates pll vco config and dividers statically at elaboration time in straight verilog 2005
<azonenberg_work> just give it target periods for each output
<azonenberg_work> error reporting still needs some work if it can't find a valid config (you get a hard-to-debug synthesis error), but if you give it legal inputs it works fine
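The elaboration-time search azonenberg_work's core performs can be modeled outside Verilog. A rough Python sketch of such a search — the VCO range and divider limits below are assumed, roughly 7-series-MMCM-like numbers, not taken from his core:

```python
def mmcm_config(fin_hz, targets_hz, vco_min=600e6, vco_max=1200e6):
    """Brute-force search for a VCO multiplier M, input pre-divider D,
    and per-output dividers O so each target frequency is matched as
    closely as possible. Ranges are rough 7-series-like assumptions."""
    best = None
    for d in range(1, 5):               # input pre-divider (assumed range)
        for m in range(2, 65):          # VCO multiplier (assumed range)
            fvco = fin_hz * m / d
            if not (vco_min <= fvco <= vco_max):
                continue
            outs, err = [], 0.0
            for t in targets_hz:
                o = max(1, min(128, round(fvco / t)))   # per-output divider
                f = fvco / o
                outs.append((o, f))
                err += abs(f - t) / t   # total relative frequency error
            if best is None or err < best[0]:
                best = (err, m, d, outs)
    if best is None:
        # trivial to report in Python; the hard part in straight Verilog
        raise ValueError("no VCO frequency in range for this input clock")
    return best
```

The `raise` at the end is the point of the surrounding discussion: in a host language the "no valid config" case is a one-liner, while in synthesizable Verilog it turns into an opaque synthesis failure.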
<shapr> Does it work with yosys + ice40 ?
<azonenberg_work> It's a wrapper around the 7 series MMCM only for now but you could easily drop in other pll ip
<whitequark> azonenberg_work: that's the main problem with this approach
<whitequark> error reporting
<azonenberg_work> whitequark: well the BIG problem
<azonenberg_work> is that vivado doesn't support $display and $finish during synthesis
<whitequark> like i had my own verilog hacks for that
<azonenberg_work> so there is no way to print errors during elaboration
<shapr> seems to me the language should handle clock division
<azonenberg_work> ISE did
<whitequark> and the migen code is far more pleasant to use
<azonenberg_work> So you could initial begin / $display("PLL VCO frequency %d out of range"); / $finish; / end
<azonenberg_work> and it would fail synthesis with a nice readable error
<whitequark> the thing is i consider error reporting a core programming language feature
<azonenberg_work> vivado just chokes with no details
<whitequark> so anything that doesnt provide that is unusable without further thought
<shapr> whitequark: I'll switch to clash-lang once I have a better understanding of verilog
<azonenberg_work> i consider that a vivado bug :p
<shapr> since Haskell is my language of choice
<shapr> but I gotta start with Verilog
<whitequark> azonenberg_work: not mutually exclusive
<whitequark> azonenberg_work: it's *also* a specification bug
<azonenberg_work> That i agree with
<whitequark> in 1364.1
<whitequark> one of the many bugs in that spec
<azonenberg_work> ooooh
<azonenberg_work> SO they don't support $display, but this is almost as good
<azonenberg_work> you could easily make a wrapper that uses `ifdef VIVADO to do this instead of $display
<whitequark> i thought verilog was supposed to be portable...
<azonenberg_work> whitequark: that's like saying C is supposed to be portable
<azonenberg_work> it is, until you start doing things like casting between pointer types or making API calls
<azonenberg_work> or really anything useful :p
<whitequark> azonenberg_work: yes, this is also why i dont write c
<whitequark> one of the reasons
<whitequark> so yes
<shapr> Mostly I want to port code from other boards to the BeagleWire, and I have to manually change clock division
<shapr> I convinced m_w to create a forum for the BeagleWire, but I'm the only person who's ever posted :-( I was wrong
<mwk> hmm
<mwk> I don't suppose there's some existing Python parser for BSDL?
<mwk> oh huh.
<mwk> huh, it actually works, nice
<shapr> I'm surprised
<q3k> huh
<tnt> assign o = sel ? i : (r + { 12'h000, i[3:0] }); ( o/i/r are all 16 bits ).
<tnt> This should be able to fit in 16 LUTs or am I crazy.
<whitequark> tnt: on ice40?
<tnt> yes
<tnt> For the 4 LSB, it's the classic 'add with bypass'. For the 12 other bits, if you use the 'sel' bit as one of the carry input, this should work.
<whitequark> tnt: yeah it does fit
<whitequark> oh wait
<whitequark> ok, so regular synth_ice40 infers 32 LUTs
<tnt> Ah yeah, if I write it like assign o = sel ? i : (r + { {12{sel}}, i[3:0] }); then it works.
<whitequark> synth_ice40 -relut infers 27 LUTs
<whitequark> yeah, only with -relut though
<tnt> Right, but that's standard in my option list :D
<whitequark> ah
<tnt> I've never seen any downsides to having it so far.
<tnt> (thanks for writing it btw, it's been very good to me !)
<whitequark> there aren't
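The two expressions tnt wrote compute the same function, since the `{12{sel}}` replication only differs from `12'h000` in the branch the mux discards. A quick exhaustive check with a Python model (the model function names are illustrative):

```python
MASK = 0xFFFF  # o/i/r are all 16 bits

def model_zeros(sel, i, r):
    # assign o = sel ? i : (r + {12'h000, i[3:0]});
    return i if sel else (r + (i & 0xF)) & MASK

def model_repl(sel, i, r):
    # assign o = sel ? i : (r + {{12{sel}}, i[3:0]});
    upper = 0xFFF0 if sel else 0x0000   # {12{sel}} in bits [15:4]
    return i if sel else (r + (upper | (i & 0xF))) & MASK
```

Functionally identical, but the second form hands `sel` to the adder's upper bits, which is what lets synthesis fold the mux into the carry chain and reach the smaller LUT count.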
<adamgreig> what does relut do?
<whitequark> abc cannot look into adders with hard logic
<whitequark> so it infers LUTs that can be packed further with some other LUTs
<whitequark> -relut invokes an arch-independent LUT optimization pass that I wrote
<azonenberg_work> whitequark: here's a fun question
<azonenberg_work> what is abc good for if you have your own lut optimizer?
<whitequark> azonenberg_work: opt_lut does not consider delay
<azonenberg_work> how far are we from being able to remove abc from the workflow entirely? (that is likely to be necessary if we want to preserve netnames etc)
<whitequark> so if you remove abc entirely, you get like 5 times slower logic
<whitequark> however
<whitequark> flowmap infers lut with provably optimal delay
<whitequark> so the logic inferred with flowmap is as fast as that inferred with abc*
<tnt> Well, abc also seems to freely shuffle all the logic around to come up with a better one.
<azonenberg_work> whitequark: Do you consider just depth of the graph, or actual lut delay?
<azonenberg_work> in particular, especially on Some Devices
<whitequark> * caveats: there's more of that logic, which influences packing, and flowmap is purely combinatorial, i.e. doesn't do resynthesis
<whitequark> azonenberg_work: just depth currently
<azonenberg_work> there are noticeable differences in Tpd from one lut input to another
<whitequark> but it would be straightforward to adjust the cost function
<azonenberg_work> so it would be nice to consider that at some point
<azonenberg_work> and potentially allow editing lut equations to swap inputs
<azonenberg_work> that may or may not be done in the same optimization pass
<daveshah> I know that you can tell abc about this
<azonenberg_work> but it's something to consider
<whitequark> azonenberg_work: oh, interesting
<daveshah> Yosys doesn't use this atm though
<whitequark> that's definitely doable and easy
<sorear> also, whitequark's work is entirely LUT mapping and AFAIK doesn't make any progress towards removing abc from yosys-qflow
<tnt> daveshah: does the ice40 also have different Tpd per lut input ?
<daveshah> Yes, about 20% difference iirc
<whitequark> sorear: yeah, i do not currently care about asics at all
<azonenberg_work> whitequark, tnt: in greenpak (the only part i looked at) i was able to measure Tpd of a lut2 to lut4 on all inputs
<daveshah> We do do LUT permutation in PnR atm
<tnt> interesting. I never paid attention to that.
<daveshah> But don't consider delay while doing so, only routability
<azonenberg_work> i dont remember absolute numbers, but there were significant differences
<azonenberg_work> iirc MSB was fastest and LSB was slowest
<whitequark> interesting
<azonenberg_work> Which makes sense if the lut is implemented as a mux cascade
<whitequark> so thats one reason luts arent glitchless
<daveshah> It would be quite easy to add the relative delays to pips in nextpnr to do LUT input swapping
<azonenberg_work> with the inputs constant and the selectors based on the lut inputs
<azonenberg_work> inputs bitstream programmed*
<tnt> azonenberg_work: yeah, I would expect it to be given it's probably a mux tree. But not sure if that was specified for the ice at all or if they just said 'heh, just pick worst case'.
<azonenberg_work> greenpak did not specify
<azonenberg_work> this is based on my own post silicon characterization data
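The mux-cascade structure azonenberg_work describes — bitstream-programmed init bits at the leaves, LUT inputs driving the selects — can be sketched directly. The level ordering here is an assumption for illustration; real devices vary:

```python
def lut4_mux_cascade(init, a, b, c, d):
    """Evaluate a 16-bit LUT4 init word through a tree of 2:1 muxes.
    The init bits are the leaves; each input selects one mux level.
    The input driving the level nearest the output traverses the
    fewest muxes, one plausible source of per-input Tpd differences."""
    bits = [(init >> k) & 1 for k in range(16)]
    for sel in (a, b, c, d):              # a = deepest level, d = last mux
        bits = [bits[2 * k + sel] for k in range(len(bits) // 2)]
    return bits[0]                        # equals init bit a + 2b + 4c + 8d
```

Folding the list in half once per input is exactly the mux tree: after four folds, one bit remains, indexed by the four inputs.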
<daveshah> tnt: that's what they did for ecp5
<azonenberg_work> the differences were significant and easily measurable
<tnt> azonenberg_work: right. But then we need your data in the timing model :p
<azonenberg_work> Now that my lab is coming together working on that is a todo
<azonenberg_work> i want to make a better characterization flow for some other parts too
<whitequark> "give me your data and your timing model"
<azonenberg_work> whitequark: the problem now is my data is incomplete :p
<azonenberg_work> i want to do characterization of coolrunner as well
<whitequark> a lesser known predecessor to terminator
<azonenberg_work> so we can optimize choice of product terms etc based on measured delays
<tnt> azonenberg_work: yeah, been following your pics on twitter and damn ... I wish I wasn't in a 50 sq meter appt :p
<azonenberg_work> tnt: lol i'm in a tiny apt too, and a construction site
<azonenberg_work> which is not really fit for living in but i've managed to set up one little table of lab gear in a corner
<azonenberg_work> this is going to be an awesome lab when i am *done*
<azonenberg_work> but that's a ways out
<cyrozap> mwk, shapr, q3k: lol why'd you think it wouldn't work?
<azonenberg_work> tnt: (37 m^2 of climate controlled, ESD floored lab space and another 18 m^2 of adjacent office/conference space)
<q3k> cyrozap: oh hello
<azonenberg_work> whitequark: anyway, example of a potentially interesting optimization would be if you had 8 inputs feeding through two lut4s into a third lut
<q3k> cyrozap: mostly because BSDL is basically vhdl
<cyrozap> Also, I apologize for that library being so slow--I'm currently working on a replacement in Rust with Nom.
<q3k> cyrozap: which is basically ada
<tnt> azonenberg_work: does the climate control include some form of particulate/dust control ?
<azonenberg_work> You might be able to not only reorder inputs between luts, but push some signals between luts
<azonenberg_work> to use the critical path on the fastest part of each lut
<q3k> cyrozap: which is basically pain to parse/analyze/elaborate when you're not ada
<sorear> what month and year was the house supposed to be livable in?
<q3k> cyrozap: nothing against you or your code
<q3k> cyrozap: just that's vhdl and vhdl derivatives are painful like that
<whitequark> azonenberg_work: right now my priority is area optimization
<azonenberg_work> sorear: *original* goal?
<azonenberg_work> or current expected?
<azonenberg_work> or what
<whitequark> azonenberg_work: i get the same delay as abc but about twice as much area on synthetic benchmarks
<whitequark> (less on real ones)
<azonenberg_work> whitequark: eventually being able to specify a balance would be nice
<azonenberg_work> relative weights for area and timing targets or something
<daveshah> I noticed delay is a bit behind abc on picorv32
<whitequark> azonenberg_work: that is my current focus
<whitequark> azonenberg_work: there is a -optarea switch that controls depth relaxation
<azonenberg_work> sorear: original goal was april 2018 move-in ready
<daveshah> I think this is because of the missing logic optimisation and balancing atm
<whitequark> azonenberg_work: increasing it by 1 trades off 1 logic level to less area
<daveshah> Neither should be that hard to add in Yosys
<azonenberg_work> new plan is "it'll be done when it's done" :p
<whitequark> daveshah: some of it might be because of more congested routing, no?
<azonenberg_work> sorear: https://i.imgur.com/9lvyFCI.png is our current dependency chart
<daveshah> whitequark: yes, possibly also
<cyrozap> q3k: BSDL is actually a super restricted subset of VHDL--it's really not much more than a configuration language like TOML, but of course they didn't have that back in the 80's.
<cyrozap> Err, 90's.
<sorear> is it better or worse than (a) ASN.1 (b) ASC X.12/EDIFACT
<q3k> cyrozap: so is it specced to be restricted, or are most BSDL files out there de facto okay?
<azonenberg_work> whitequark: another thing i want to see eventually is iterative/feedback through P&R back into resynthesis or similar
<q3k> sorear: different
<azonenberg_work> for example, if you have a lut driving two loads that are very far apart spatially, despite only having a fanout of 2
<whitequark> azonenberg_work: resynthesis will be FlowSYN but it's somewhat far off right now
<azonenberg_work> it might make sense to replicate the driver
<azonenberg_work> whitequark: i mean specifically p&r guided *iterative* resynthesis
<azonenberg_work> where you find parts of the physical design that have trouble making timing and edit the netlist to optimize
<whitequark> resynthesis or duplication?
<azonenberg_work> i dont know of any tools that do this now
<azonenberg_work> Either
<whitequark> duplication is easy to integrate into the current flowmap
<oeuf> mrow
<whitequark> just give me a list of what to duplicate
<whitequark> basically
<azonenberg_work> for example, retiming with timing info from an initial p&r attempt
<azonenberg_work> whitequark: you're missing the point
<daveshah> Duplication probably doesn't even need synthesis
<azonenberg_work> duplication is easy, knowing what to duplicate can't be done at synthesis time
<daveshah> Just copy the cell during pnr
<whitequark> azonenberg_work: my point is that i work on synthesis tools, not p&r tools :P
<azonenberg_work> yeah i know
<azonenberg_work> daveshah: i think more advanced optimization is possible too if you have feedback and blurred lines between synth and par
<azonenberg_work> for example, say a particular portion of the system is super routing congested
<daveshah> Yes, definitely
<azonenberg_work> it might make sense to re-place it ex post facto
<azonenberg_work> spread out more
<cyrozap> sorear: Probably better, though that's not really saying much ;)
<azonenberg_work> or move some registers earlier/later in the netlist
<azonenberg_work> or even physically move some registers
<whitequark> i feel like we need a basic workflow first
<azonenberg_work> like, moving a pipeline register to be closer to a load that has a lot of combinatorial logic after the next register
<azonenberg_work> to allow easier retiming
<azonenberg_work> whitequark: yes, of course
<daveshah> I think most of these transformations are best done inside the PnR tool rather than going back to synthesis
<azonenberg_work> i'm just thinking ahead about things the architectures should be able to target
<azonenberg_work> daveshah: yeah, agreed
<azonenberg_work> in particular though i think placement and routing shouldn't be waterfall flow
<azonenberg_work> routing needs to be able to feed back into placement and some pre-placement optimizations
<azonenberg_work> some timing problems are very hard to see until you start routing the design
<cyrozap> q3k: It's specced restricted. For one example, by a strict interpretation of the standard, you can't even change the order that the BSDL description statements appear in.
<azonenberg_work> cyrozap: can you parse bsdl with regexes?
<azonenberg_work> (only half kidding)
<whitequark> you can parse html with regexes
<q3k> cyrozap: oh, that's pretty good then. i thought it was much more liberal.
* qu1j0t3 leaves channel for a bit
<azonenberg_work> whitequark: sure, and you can make regexes for recognizing valid email addresses too
<azonenberg_work> it's only like 4 screens of text iirc?
<whitequark> azonenberg_work: no no, email address grammar is actually regular
<whitequark> html is context free
<cr1901_modern> azonenberg_work: That was a joke
<whitequark> but some languages offer you regexps that have subexpression calls, like oniguruma
<whitequark> (ruby's re engine)
<cr1901_modern> See classic SO answer on parsing html w/ regex.
<cr1901_modern> it devolves into Zalgo
<whitequark> so, ruby's regexps can parse any context free language
<cyrozap> azonenberg_work: You can actually parse a surprising amount of it with regexes, but not completely since it has string concatenation and most of the actual BSDL configuration stuff is stored in those strings (which IMO is akin to storing information as JSON in a string value in a TOML file).
<cr1901_modern> The full email regex is still pretty awful
<whitequark> cr1901_modern: only because of the horrible amount of duplication in it
<whitequark> if you can extract parts of it into variables it becomes ok-ish
<shapr> cyrozap: oh, I didn't know you wrote that :-)
azonenberg_work has quit [Ping timeout: 268 seconds]
<cyrozap> BSDL is a pile of crap, but at least it isn't a _hazardous_ pile of crap like XML, YAML, ASN.1, and the like.
<whitequark> that's because barely anyone uses BSDL for anything
<whitequark> imagine if someone tried to morph BSDL into a general purpose configuration format
<whitequark> and you'd have twelve parfsers
<whitequark> BSDL over HTTP
<whitequark> as an RPC framing
<cr1901_modern> parfsers is an amusing typo, and I don't know why
<whitequark> like barfsers
<sorear> bay area rapid…
<cyrozap> FARTS!
azonenberg_work has joined ##openfpga
<azonenberg_work> Back (laptop is dying on me)
<azonenberg_work> what did i miss?
<sorear> fart jokes
<cr1901_modern> indeed
<cr1901_modern> and parfsers and barfsers
<sorear> (html is context free) <span id="hello"></span><span id="hello"></span>
<whitequark> hm?
* cr1901_modern yawns and considers a catnap
<mwk> sorear: that's valid html
<mwk> everything is valid html
<sorear> "valid html" is a bit of a value judgement
<mwk> html5 spec says how to parse any input as html, no matter what kind of broken it is
<azonenberg_work> mwk: meanwhile any time i ever had to do anything with HTML i generated XHTML strict
<azonenberg_work> :p
<azonenberg_work> And when I redo my website and finally put something on antikernel.net everything there will be XHTML 1.0 Strict as well
<azonenberg_work> (with no javascript)
<whitequark> azonenberg_work: xhtml is dead though
<whitequark> i'm not even sure if browsers try to render it correctly anymore
<azonenberg_work> whitequark: the W3C seems to say otherwise
<cr1901_modern> I already do "no javascript". Unfortunately, I can't make CSS look good to save my life, so I have an awful yet color-blind friendly color scheme.
<azonenberg_work> Their website is <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<mwk> of course they do
<cr1901_modern> (or it used to be)
<mwk> again, everything is valid html
<whitequark> mwk: it might not have the same semantics as actual xhtml when parsed under html5 rules
<whitequark> which is my point
<whitequark> azonenberg_work: browsers do not follow w3c for a really long time
<whitequark> it's whatwg now
<sorear> browsers follow whatwg's HTML spec
<whitequark> right, in context of html
<sorear> there's a large fraction of web platform specs where the work still happens on w3c
<sorear> so w3c is still relevant as an institution
<azonenberg_work> And from what i am seeing in some charts on wikipedia, XHTML 1.0 / 1.1 strict are rendered in standards mode by all relatively recent (since IE8) browsers
<sorear> "standards mode" is something mostly unrelated
<azonenberg_work> that should mean no matter what new features are added to the browser
<azonenberg_work> that doctype should render identically for the foreseeable future across everything
<azonenberg_work> especially for something as old as xhtml 1.1 strict
<mwk> um.
<azonenberg_work> that should work with any browser made in the last ten years and anything coming up in the near to long term
<whitequark> azonenberg_work: that's not true
<whitequark> i mean
<sorear> "standards mode" is from https://quirks.spec.whatwg.org/ and I think it's just about a few specific IE5 compat hacks
<sorear> not related to xhtml at all
<whitequark> apart from what sorear said, acid3 no longer passes in modern browsers
<whitequark> or acid2
<whitequark> absolute compatibility has not been regarded as useful among browser vendors for quite a while
<whitequark> in many cases it's for security reasons, in many cases it's somewhat arbitrary, like the shit going on with <audio>
<azonenberg_work> whitequark: well none of my stuff has any active content or fancy media
<azonenberg_work> it'll be plain old block elements, text, static images, and minimal css
<sorear> the WebAudio drama makes me glad I basically hate all noise
<whitequark> azonenberg_work: sure, what i mean is that attempts to use xhtml are probably misguided
<whitequark> there's no harm but no benefit either
<cr1901_modern> azonenberg_work: And Idk if you can use things like video/audio w/o JS, so I think you're safe :P
<whitequark> it's just pointless
<whitequark> cr1901_modern: you can
<whitequark> <video> and <audio> work
<whitequark> see my website
<sorear> remember back when you had to use <object> and <embed> with a very specific fallback dance
<whitequark> yes
<whitequark> unfortunately, i do
<whitequark> sorear: i'm still bitter that you have to use <object> for most svg
<whitequark> (anything that has an href inside)
<azonenberg_work> whitequark: i wish there was an option to standardize on "minimal least common denominator rendering features that all browsers made this century support pixel identical, with no clientside active content"
<cr1901_modern> hahahaha HAHAHA
<sorear> I'm simultaneously that, but also wish <img> forbade animation
<azonenberg_work> sorear: if you want to be strict an animated gif is somewhat "active" even if not executing code
<whitequark> azonenberg_work: css has a *lot* of animation in it
<whitequark> hell, there's an entire subgroup of css specifically dedicated to animation
<azonenberg_work> like i said, that's something this minimal mode would simply not support
<whitequark> virtually no one wants that
<whitequark> like
<whitequark> you could just publish your stuff over gopher
<sorear> if you want "pixel identical" you get into Exciting Numerical Precision Issues
<whitequark> the 2 people who care would still use it
<whitequark> oh that too.
<whitequark> and font issues.
<azonenberg_work> whitequark: well my ultimate dream is something more akin to LaTeX for the web
<oeuf> someone said numerics
<whitequark> azonenberg_work: ... just publish a pdf?
<azonenberg_work> where documents are pre-rendered and essentially inert once created
<oeuf> an egg will now be summoned
<azonenberg_work> totally non-interactive
<whitequark> you want a pdf.
<tnt> Just a giant image with an image map for links :p
<whitequark> tnt: that wouldn't work on my 3k display
<whitequark> well i mean it would technically work, i'd still close the tab immediately
<cr1901_modern> whitequark: Huh, so it _does_ work w/o JS: https://lab.whitequark.org/notes/2016-10-30/lighting-a-match-at-480fps/ (I did inspect element, expecting the video element to embed a script. But nope, it works!)
<cr1901_modern> azonenberg_work: You probably want postscript
<whitequark> postscript is definitely active.
<whitequark> it's turing-complete
<whitequark> the pdf subset of postscript isn't
<whitequark> i mean, normal postscript is literally just forth for printers.
<cr1901_modern> it's a joke. I guess I do a bad job at distinguishing serious ideas from joke ones.
<sorear> programmability *without user interaction* isn't necessarily bad, helps compression
<sorear> have I mentioned recently that I hate the term "turing complete"
<azonenberg_work> sorear: from my perspective, thinking about security
<sorear> every interesting thing that people do with computers is primitive recursive
<azonenberg_work> i want zero programmability whatsoever in my document formats
<whitequark> azonenberg_work: that seems weird
<whitequark> i mean
<whitequark> gzip is a sort of programming language
<cr1901_modern> sorear: Idk the Ackermann function is interesting, and that's not primitive recursive.
<cr1901_modern> (yes I just strung those words together)
<qu1j0t3> it's not?
<cr1901_modern> according to wikipedia it's not
<sorear> adding a finite timeout to anything makes it PR
<cr1901_modern> and we all know that's always correct
<whitequark> yeah, you can make arbitrary programs terminate by adding the concept of "fuel"
<whitequark> compilers do this a lot
<kc8apf> speaking of LaTeX, I've been playing with CSS Paged Media
<kc8apf> because if i'm going to learn one layout language well, it damn well better work for everything
<sorear> and if you calculate fuel as 2^2^2^2^2^2^2^max(input length, 100) you still have a PR function
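sorear's point can be made concrete: Ackermann is the textbook non-primitive-recursive function, but burning one unit of "fuel" per evaluation step makes termination unconditional. A minimal sketch (the helper name is illustrative):

```python
def ack_fueled(m, n, fuel):
    """Ackermann evaluated with an explicit stack and a step budget.
    Plain Ackermann is not primitive recursive; spending one unit of
    fuel per step bounds the run, so this version always terminates.
    Returns the value, or None if the budget ran out."""
    stack = [m]                      # explicit stack avoids recursion limits
    while stack:
        if fuel == 0:
            return None              # budget exhausted: guaranteed halt
        fuel -= 1
        m = stack.pop()
        if m == 0:
            n += 1                   # A(0, n) = n + 1
        elif n == 0:
            stack.append(m - 1)      # A(m, 0) = A(m-1, 1)
            n = 1
        else:
            stack.append(m - 1)      # A(m, n) = A(m-1, A(m, n-1))
            stack.append(m)
            n -= 1
    return n
```

This is the "fuel" trick whitequark mentions compilers using: the fueled function is total no matter what the unfueled one does.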
<whitequark> daveshah: so i'm thinking
<whitequark> can i calculate mffcs faster than just considering each node in sequence?
<whitequark> i can't tell
<daveshah> Hmm
<whitequark> daveshah: like, if one MFFC includes another
<whitequark> i can include them into each other completely
<whitequark> but this doesn't really help a lot
<whitequark> daveshah: actually
<whitequark> i'm not completely sure how to efficiently partition a graph into MFFCs at all
<daveshah> whitequark: so I guess every node in sequence is a reasonable place to start? Then it seems like just working backwards until you find fanout
<whitequark> daveshah: yeah but it could be a false positive
<whitequark> consider something like this
<whitequark> A, B, C; A->B, A->C, B->C, start at C
<whitequark> if you traverse the graph in order C, A, B you would think A is not in the MFFC
<whitequark> so you'd have to build the entire cone
<whitequark> for every node
<whitequark> and then throw most of it away
<daveshah> Need to think about this more, but does topological ordering help somehow?
<whitequark> daveshah: that only works on a DAG
<whitequark> but yeah i think it would help
<whitequark> hm
<whitequark> i probably can't handle SCCs regardless
<daveshah> I'm not going to complain about a tool that makes them a fatal error, tbh
<daveshah> In any case leaving such structures unoptimised or only trivially optimised should be fine in almost all cases
<whitequark> I think the basic flowmap algorithm works on SCCs
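One way to compute the MFFC under discussion without the ordering trap is to grow the cone from the root to a fixed point, admitting a node only once all of its fanouts are already inside. A sketch on a DAG (names and dict representation are illustrative):

```python
def mffc(root, fanins, fanouts):
    """Maximum fanout-free cone of `root`: the nodes all of whose
    fanout paths pass through the cone. Iterating to a fixed point
    avoids the trap from the A->B, A->C, B->C example: visiting in
    the order C, A, B must not conclude that A is outside."""
    cone = {root}
    changed = True
    while changed:
        changed = False
        frontier = {u for n in cone for u in fanins.get(n, ())}
        for u in frontier - cone:
            # u joins only when every fanout of u is already in the cone
            if all(w in cone for w in fanouts.get(u, ())):
                cone.add(u)
                changed = True
    return cone
```

On the three-node example from earlier in the discussion, the first pass admits B (its only fanout is C) and the second pass then admits A, so the MFFC of C correctly comes out as all three nodes.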
<whitequark> ... oh
<whitequark> OH
<whitequark> THIS IS WHY LATCHES ARE BAD FOR TIMING
<whitequark> because the fucking synthesizer has no clue what to do with them
<daveshah> Nor do any timing analysis tools