noobineer has quit [Ping timeout: 276 seconds]
Miyu has quit [Ping timeout: 252 seconds]
luvpon has joined ##openfpga
unixb0y has quit [Ping timeout: 252 seconds]
unixb0y has joined ##openfpga
<SolraBizna> I want to try using a DDR3 MRAM at glacial speeds
<sorear> do you have a datasheet handy for the ddr3 mram? curious what the timing is like
<SolraBizna> the datasheet claims maximum tCK is 3.3ns, but I'm curious to know if that's a hard limit
<SolraBizna> PLLs are too AC for me, I have no understanding of their limitations
<sorear> no interleaved bursts …
<sorear> that's a pretty high rated BER, ECC isn't remotely optional
<sorear> if you run that chip at max speed the specification allows for ~50 flipped bits *per second*
<Bob_Dole> wait OhGodAGuardian is the vbios on r9 fury unlocked? can that benefit any from tighter timings?
<Bob_Dole> Good Old Gamer is off trying to do gaming things with an r9 fury nano and I suddenly thought of that
<SolraBizna> I was going to use it with some 16:8 error correction code
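A sketch of what such a 16:8 code could look like; the exact code isn't named above, so this assumes one (8,4) extended Hamming SECDED codeword per nibble, two per data byte:

```python
# Illustrative guess at a "16:8" code (not necessarily the one meant):
# an (8,4) extended Hamming SECDED codeword per nibble, two per byte.
# Each codeword corrects any single-bit error and detects double-bit
# errors, the usual minimum for a memory with a nonzero BER.

def encode4(d):
    """4-bit data -> 8-bit codeword; bit 0 is the overall parity."""
    b = [0] * 8
    d0, d1, d2, d3 = ((d >> i) & 1 for i in range(4))
    b[3], b[5], b[6], b[7] = d0, d1, d2, d3          # data positions
    b[1] = d0 ^ d1 ^ d3                              # Hamming parities
    b[2] = d0 ^ d2 ^ d3
    b[4] = d1 ^ d2 ^ d3
    b[0] = b[1] ^ b[2] ^ b[3] ^ b[4] ^ b[5] ^ b[6] ^ b[7]
    return sum(bit << i for i, bit in enumerate(b))

def decode8(w):
    """8-bit codeword -> 4-bit data, correcting one flipped bit."""
    b = [(w >> i) & 1 for i in range(8)]
    syndrome = 0
    for pos in range(1, 8):
        syndrome ^= pos * b[pos]                     # XOR of set positions
    overall = b[0] ^ b[1] ^ b[2] ^ b[3] ^ b[4] ^ b[5] ^ b[6] ^ b[7]
    if syndrome and not overall:
        raise ValueError("double-bit error: detected, not correctable")
    if syndrome:
        b[syndrome] ^= 1                             # fix the single error
    return b[3] | (b[5] << 1) | (b[6] << 2) | (b[7] << 3)

assert all(decode8(encode4(d) ^ (1 << e)) == d       # any single flip is fixed
           for d in range(16) for e in range(8))
```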
luvpon has quit [Ping timeout: 250 seconds]
<SolraBizna> I wouldn't expect DDR3 DRAM to be usable at a few MHz because you couldn't refresh it fast enough
<SolraBizna> But I think the only limitation with DDR3 MRAM would be whether whatever mechanism it uses to quadruple the clock would still work
<sorear> PLLs are a kind of filter and they have a passband
<SolraBizna> (still hopeful) and are those passbands routinely a few orders of magnitude wide? :D
<sorear> the ones in FPGAs are
<SolraBizna> what's the usual failure mode when the source signal is WAY too slow?
<SolraBizna> DC?
<sorear> a PLL will always generate a clock in a certain frequency range
<SolraBizna> (that's what I would expect from a passband filter)
<sorear> until the loop closes, the output will have no phase or frequency relationship with the input
<sorear> there's a "loop filter" between the comparator and the VCO, with a bit of magic I don't quite understand that controls stability under various types of input noise
<sorear> but if you don't get lock the PLL output will just be an unrelated clock
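A toy phase-domain model of that behavior, with invented numbers: inside the VCO's tuning range the loop closes, and far outside it the output is just an unrelated clock.

```python
# Toy first-order PLL in the phase domain; every number is invented.
# The VCO is clamped to a 130-170 MHz tuning range, so a 4 MHz
# reference can never pull it and the output free-runs.
import math

def simulate(f_ref, steps=20000, dt=1e-9):
    f_free, kp = 150.0, 20.0          # VCO free-running MHz, loop gain
    f_lo, f_hi = 130.0, 170.0         # VCO tuning range
    phi_ref = phi_vco = 0.0           # phases, in cycles
    f_vco = f_free
    for _ in range(steps):
        err = math.sin(2 * math.pi * (phi_ref - phi_vco))   # phase detector
        f_vco = min(f_hi, max(f_lo, f_free + kp * err))     # clamped VCO
        phi_ref += f_ref * 1e6 * dt
        phi_vco += f_vco * 1e6 * dt
    return f_vco

print(simulate(140.0))  # settles at ~140: the loop closes, lock
print(simulate(4.0))    # wanders inside 130-170: unrelated output clock
```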
<SolraBizna> that's what I'm afraid of
<SolraBizna> with a decent oscilloscope and one of these ICs it would be easy to test this
<Bob_Dole> should I start trying to source ddr3 mram again?
<SolraBizna> operate the module with a slowly decreasing clock speed, and watch the D lines to see when their transitions stop lining up
<Bob_Dole> for small quantities
<sorear> uh
<sorear> whether the PLL can reach lock when a 100MHz clock is suddenly applied, and whether the PLL can keep lock if a 400MHz clock is applied and gradually lowered to 100MHz, are not the same question
<SolraBizna> that's true
<SolraBizna> my final clock would be in the 4-16MHz range, though
<SolraBizna> which is why I don't hold out much serious hope that it would function
<SolraBizna> but the DDR3 modules are so much cheaper and denser than the asynchronous ones...
<SolraBizna> ($2/mebibyte vs. $20)
<SolraBizna> oh, also: that error rate and retention time is at 70°C, it's *much* better at lower temperatures
rohitksingh_work has joined ##openfpga
soylentyellow has quit [Remote host closed the connection]
soylentyellow has joined ##openfpga
<OhGodAGuardian> television: hey
<OhGodAGuardian> Vivado can be remarkably smart and stupid
<OhGodAGuardian> It cleverly noted Keccak's round constants can be constructed from the round number itself.
Bike has quit [Quit: Lost terminal]
<television> hey
azonenberg_work has joined ##openfpga
<OhGodAGuardian> holy fuck that is a thick rat's nest of wires
<OhGodAGuardian> I need a new way to rotate these.
<OhGodAGuardian> Possibly
<Bob_Dole> I find really pretty cable routing to be just as much of a PITA as spaghetti routing
<OhGodAGuardian> oh god
<OhGodAGuardian> I think it's blowing up this module to the relative size of Texas
<Bob_Dole> o.o
<Bob_Dole> what fpga? that one OhGodAGirl was talking about for ethash?
<OhGodAGuardian> No, I'm using my own XC7VX690T for fun
<OhGodAGuardian> Bob_Dole: I'm doing raw throughput
<OhGodAGuardian> acceleration I would go about differently
<Bob_Dole> oh I thought she was doing something with an hbm carrying one
<Bob_Dole> 690T.. how many LUTs does that one have, would assuming a bit under 700k from the name be sane there?
jcarpenter2 has joined ##openfpga
<SolraBizna> 7
<SolraBizna> it has 7 LUTs
<SolraBizna> that's my guess
rofl_ has joined ##openfpga
jcarpenter2 has quit [Read error: Connection reset by peer]
<OhGodAGuardian> Bob_Dole: 7-Series slices, 108,300
rofl__ has quit [Ping timeout: 250 seconds]
<SolraBizna> well, I was the closest without going over
<OhGodAGuardian> Bob_Dole: Every slice has 4 6-input LUTs.
<OhGodAGuardian> So, 433,200.
<Bob_Dole> I was guessing based on the 7 series chip the acorn cl 101 having a 100T at the end of its name and ~100k LUT6's
<sorear> yeah, but those numbers don't always have a straightforward meaning
<OhGodAGuardian> Bob_Dole: An XC7A100T has 63,400 LUT6s
<OhGodAGuardian> I know, I have one
<Bob_Dole> is it? I must have misread the datasheet
<OhGodAGuardian> 15.85k slices
<sorear> there's at least one FPGA line where they switched from 4-input to 6-input LUTs and then multiplied the numbers in the part names by 1.4 to try to get some consistency
<Bob_Dole> I was mostly looking at it to try to compare its logic resources to the ECP5
<OhGodAGuardian> Bob_Dole: probably not a good idea to try and compare cross-arch like that
<OhGodAGuardian> like, some Lattice ones have LUT4s, but I can make one LUT6 2x LUT3s.
<Bob_Dole> eh, I was going for "general idea" and knowing LUT4 is a little less versatile than LUT6
<OhGodAGuardian> I can also make up to 2x LUT5s, but they gotta share inputs
<OhGodAGuardian> Cool, though
<Bob_Dole> all of the lattice ones that matter to me have LUT4, and LUT4.
<OhGodAGuardian> One out?
<Bob_Dole> not sure on the outs
<OhGodAGuardian> if it can only do one 4-input boolean function (and not, say, 2x2 input ones), then likely one out
<sorear> unfortunately lattice doesn't have a datasheet saying exactly what is in the native lattice logic cell (the siliconblue logic cell is very different)
<OhGodAGuardian> I like how straight up Xilinx is - an engineer even commented once on the internals of a CARRY4 primitive
<Bob_Dole> Ice40 has LUT4 but yeah I was seeing it's a little different in what it can do compared to the ECP5's LUT4s
<Bob_Dole> but actually -using- them is... getting a bit outside of what I know how to do.
<Bob_Dole> >.>
<Bob_Dole> I do research to figure out what SolraBizna could probably pull off with specific parts, or to serve specific needs in making something.
<OhGodAGuardian> makes sense
<Bob_Dole> I find things that fit as Candidates for a Task and then forward the candidate parts to him.
<OhGodAGuardian> tbh, sounds like the dude is at home with doing his own gate logic
<OhGodAGuardian> go get him a foundry.
<Bob_Dole> I've looked at it
<Bob_Dole> I don't have the 30-300k starting prices for the old machines that come up for it.
<Bob_Dole> but there's someone on twitter that's working on making his own that'd probably be more like 10k
<azonenberg_work> OhGodAGuardian: Xilinx's part numbering, for the LUT6 based parts, is pretty screwy
<Bob_Dole> Jeri Ellsworth was showing off a process that'd work if you only need tens of transistors
<OhGodAGuardian> azonenberg_work: Agreed.
<Bob_Dole> but I could build what she was doing
<azonenberg_work> the tl;dr is that the number is a (rounded) gate capacity for the LUT fabric only (hard IP excluded, some older stuff like spartan3 did include the hard multipliers etc)
<azonenberg_work> expressed in terms of "logic cells"
<OhGodAGuardian> But fuck, the idea of using the WHOLE FUCKING LINE as basically the same CLB design (almost)
<azonenberg_work> An XC7A100T does not have 100,000, or approximately 100,000, of anything
<OhGodAGuardian> is AWESOME
<OhGodAGuardian> azonenberg_work: that I knew
<OhGodAGuardian> the LC bullshit
<azonenberg_work> It has a capacity of approximately 100K LUT4+FF equivalents according to their formula where 1 LUT6+2FF -> 1.6 logic cells
<OhGodAGuardian> logic cells mean diff things to diff vendors, though
<azonenberg_work> So actual capacity is (roughly) 100K / 1.6 LUT6s
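Checking that arithmetic:

```python
# azonenberg's formula, sanity-checked: 1 LUT6 + 2 FFs counts as 1.6
# LUT4-equivalent "logic cells", so the marketing number divides out.
marketing_lcs = 100_000        # the "100" in XC7A100T, roughly
print(marketing_lcs / 1.6)     # 62500.0, close to the real 63,400 LUT6s
```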
<OhGodAGuardian> so if it's not portable
<OhGodAGuardian> wtf is it worth
<azonenberg_work> Then for ultrascale, they started trimming off some zeroes
<OhGodAGuardian> lol
<azonenberg_work> i think one zero
<azonenberg_work> because an XCKU040 is roughly 400K LCs
<azonenberg_work> For ultrascale+ they trimmed off two zeroes
<azonenberg_work> so an XCKU3P is roughly 300K LCs
<azonenberg_work> about the only constant is that bigger numbers within the same line are bigger fpgas :p
<OhGodAGuardian> Well, if I'm looking to buy
<OhGodAGuardian> I pay attention
<OhGodAGuardian> They could do Intel-style codenames for all I care
<OhGodAGuardian> although I personally find XC7VX690T easier on the memory.
<sorear> orrrr you could get 40 25000-LUT4 LFE5U-12Fs for $5 ea on the part sites
<Bob_Dole> I was trying to figure out how ECP5 speed grades compared to the Artix7's too
<Bob_Dole> and that one, I did not find.
<OhGodAGuardian> And have to put them on my own board, which will be larger than life, and then, on top
<Bob_Dole> at all
<sorear> yeah, that'd be good to know
<OhGodAGuardian> once you go off-chip
<OhGodAGuardian> shit gets sloooow.
<sorear> i doubt what you're building cares about latency
<azonenberg_work> OhGodAGuardian: they actually *do* have codenames internally
<Bob_Dole> OhGodAGuardian, the other day I jokingly suggested over 9000 of the 85k LUT4 ECP5 fpgas. and then we started talking about viability of actually doing it for some reason
<OhGodAGuardian> sorear: Actually, what I'm building atm does not
<azonenberg_work> they just dont share them publicly most of the time
<OhGodAGuardian> azonenberg_work: ah, cool
<azonenberg_work> the ones i know mostly came second or third hand
<azonenberg_work> the ACAP is Everest
<sorear> we talking about BladeRunner now?
<OhGodAGuardian> sorear: however, what I plan to do
<OhGodAGuardian> is get a PCIe SSD
<azonenberg_work> 7 series is Fuji
<azonenberg_work> Ultrascale is Olympus, Ultrascale+ is Diablo
<sorear> Bob_Dole: i mean, this is a project I've actually thought about, so I shared a few of those thoughts when you asked
<OhGodAGuardian> and abuse it to attempt to transcode a shitton of smut, and output over its 2 SATA III 6Gb/s ports
<Bob_Dole> ah
<OhGodAGuardian> I got this board cause I can never run outta fun shit to do
<OhGodAGuardian> last Xmas
<Bob_Dole> I had thought about a multi-FPGA board in the past for being able to make use of smaller FOSS-supported FPGAs, but never over 5
<OhGodAGuardian> I'd love one
<Bob_Dole> didn't think there'd be any chance of getting good interconnects between them going over that
<azonenberg_work> Bob_Dole: I have plans i need to dust off at some point for a Eurocard FPGA cluster called MARBLEWALRUS
<OhGodAGuardian> Bob_Dole: Treat them like super-multiprocessors! :D
<azonenberg_work> It used PCIe connectors on the backplane but that was just for convenience, it wasnt actually PCIe
<azonenberg_work> The main interconnect was gigabit ethernet on a switched fabric to everything
<Bob_Dole> me and SolraBizna's plans for things would be generally using SIMM/DIMM Cards
<azonenberg_work> Then I had a ring bus of differential pairs that I could plausibly use for high speed interconnect, it was just me using free pins for whatever
<azonenberg_work> But it was not intended for high bandwidth supercomputing type stuff
<azonenberg_work> it was targeting cloud-type environments for unit testing of IP cores
<Bob_Dole> though I also was pointing him at old ISA slots for durable connections
<azonenberg_work> The idea was to spin up an "artix7-large" instance on cue and run test stuff on it over a LAN
<azonenberg_work> With a socket interface to jtag, uart, etc
<azonenberg_work> basically a built-in starshipraider attached to each card as an "IPKVM" equivalent
<Bob_Dole> (anything we work on is for durable simple designs or bringing unusual but interesting things into reality)
<OhGodAGuardian> would be cool
<azonenberg_work> So any communication between the cards was pretty much an afterthought
<azonenberg_work> it was meant for running many independent designs
<Bob_Dole> though SMP 65816s might be going outside of Simple
<Bob_Dole> also I thought eurocard was a standard slot for backplane, like the 100pin things?
<azonenberg_work> No
<azonenberg_work> my understanding is that the eurocard standard only specifies the rack and card mechanical dimensions
<azonenberg_work> There is a *common* backplane used for VME
<azonenberg_work> But the standard doesnt mandate it
<azonenberg_work> I was using PCIe ... x1? i think, for nodes
<azonenberg_work> using the PCIe data pairs for ethernet
<azonenberg_work> same power and ground pins as pcie
<azonenberg_work> same jtag
<Bob_Dole> a lot of my thoughts I've had for what slots to use were "not being common for modern things, but still based on older standardized slots" but a lot of that got shattered looking at Mouser
<azonenberg_work> then the ethernet switch and management cards used x16
<Bob_Dole> A lot more connectors are readily available than I thought
<azonenberg_work> I put some effort into being compatible-enough with pcie that a bad connection would be non-fatal
<azonenberg_work> same voltage levels on critical signals, 12V power and ground in the same spots, etc
<azonenberg_work> That said, my eurorack and a pcie computer case are very different
<Bob_Dole> I think Jameco is the place to go to actually buy the slots, and not Mouser
<azonenberg_work> and i did engineer it so that you physically could not fit a pcie card into my rack
<azonenberg_work> One of my cards might fit into a PC but it would stick out of the top of the case
<azonenberg_work> iirc
<Bob_Dole> that's always handy
<azonenberg_work> actually no
<azonenberg_work> i think i mirrored the connector
<azonenberg_work> so my card would have physically intersected the back side of the case
<sorear> if you're doing ecp5, you want to use the ordinary I/Os for inter-chip communication, they're slower than the SERDESes but _far_ more numerous and also better latency
<Bob_Dole> I've not had the consideration of rack/system-casing
<azonenberg_work> it would have been pretty hard to abuse
<sorear> the ecp5 serdes has a latency of ~13 *words*
<Bob_Dole> ouch
<azonenberg_work> that isnt unreasonable for a serdes
<azonenberg_work> have you ever tried designing one? do you know how much fluff there is in all the line coding, clock domain crossing, etc?
<azonenberg_work> (and do you know what the latency on a gigabit ethernet PHY is?)
<Bob_Dole> none of the above. (that's SolraBizna's job.)
<Bob_Dole> mine is "read what people are doing, find pitfalls people talk about, figure out what is probably possible. solder everything together once something is designed."
<sorear> i'm not calling it unreasonable
<sorear> i don't, but i'm curious
<azonenberg_work> also i wonder what the latency of tragiclaser is
<sorear> ethernet has a semi-unavoidable transmit latency of "oops, I can't send this until I finish the max-MTU packet I just started sending"
<sorear> many higher level protocols have error correction and detection which add significant latency
<sorear> etc
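Putting a number on that transmit latency, assuming a full 1500-byte MTU on gigabit Ethernet with standard framing overheads:

```python
# Worst-case "finish the packet first" delay on gigabit Ethernet,
# assuming a 1500-byte MTU plus standard header/FCS/preamble/IPG.
frame_bits = (1500 + 18 + 20) * 8
print(frame_bits / 1e9 * 1e6)  # ~12.3 microseconds at 1 Gb/s
```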
<OhGodAGuardian> yup
<sorear> i've tried to design parts of a serdes but the thing as a whole is beyond me
<azonenberg_work> sorear: that is a queueing latency
<azonenberg_work> Not eth specific
<Bob_Dole> why did I read that as quenching latency?
<OhGodAGuardian> MTU a bitch.
<sorear> i'm a little confused about gearboxing; there are two obvious ways to do word alignment, either with a pipelined barrel shifter (adds latency) or by repeatedly glitching the clock until it lines up right (requires more analog logic)
<azonenberg_work> i like the shifter option
<azonenberg_work> since it is digital :p
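A miniature of the barrel-shifter approach, assuming 8b/10b words and the K28.5 comma; in hardware the shift is pipelined across stages, which is where the extra latency comes from:

```python
# Word alignment, barrel-shifter style: find the 8b/10b comma once,
# then carve every later word at that fixed offset. K28.5 (RD-) is
# assumed as the comma character.
COMMA = [0, 0, 1, 1, 1, 1, 1, 0, 1, 0]

def comma_offset(bits):
    """Return the alignment offset (0-9), or None if no comma found."""
    for i in range(len(bits) - 9):
        if bits[i:i + 10] == COMMA:
            return i % 10
    return None

def words(bits, off):
    """Yield the aligned 10-bit words from the raw deserialized stream."""
    for i in range(off, len(bits) - 9, 10):
        yield bits[i:i + 10]
```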
<Bob_Dole> I suppose you prefer digimon to pokemon?
<sorear> CDR PLLs are a thing I can sort of follow but don't have a good handle on
<OhGodAGuardian> digital best.
<Bob_Dole> OhGodAGuardian, Renamon or Blaziken?
<OhGodAGuardian> Bob_Dole: Not too much of a fan of either, BUT
<OhGodAGuardian> More good Renamon smut than Blaziken.
<OhGodAGuardian> Which... isn't saying much
<OhGodAGuardian> as 99.9% of it is terrible.
<Bob_Dole> that's true, but I think Renamon has been around longer for the smut making
<OhGodAGuardian> True
<OhGodAGuardian> Compare to Krystal - Rareware pretty much INVENTED this.
<OhGodAGuardian> More time - more work.
<azonenberg_work> sorear: well for tragiclaser i didnt do that either
<azonenberg_work> because the spartan6 pll cant lock to a data waveform, it needs a steady clock
<azonenberg_work> (at least, i didn't try abusing it to do that)
<azonenberg_work> So i just oversampled
<azonenberg_work> at 500 MHz for 125 Mbps
<azonenberg_work> i ended up using the same 500 MHz clock on the TX side for pre-emphasis
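A caricature of that oversampling scheme in software; real logic tracks drift continuously, but the idea is to re-center on each edge and sample mid-bit:

```python
# Caricature of 4x-oversampled data recovery (500 Msps on a 125 Mbps
# line): on every detected edge, re-learn where bit centers fall, and
# take one sample per bit at the center.
def recover(samples):
    """samples: 0/1 taken at 4x the bit rate; returns recovered bits."""
    bits, phase = [], 2                  # initial guess: sample mid-bit
    for i in range(1, len(samples)):
        if samples[i] != samples[i - 1]:
            phase = (i + 2) % 4          # edge at i: centers at i+2 (mod 4)
        if i % 4 == phase:
            bits.append(samples[i])
    return bits

# e.g. the bit pattern 1,0,1 sampled at 4x:
print(recover([1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1]))  # [1, 0, 1]
```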
<sorear> one of the parts of a serdes is a special kind of PLL that can tolerate weird duty cycles and a high proportion of missed transitions
<sorear> i've read one paper about them, i'm not an expert
<azonenberg_work> yeah and it makes sense in an asic
<azonenberg_work> my point is, if you're diy'ing it in an FPGA
<azonenberg_work> You dont have the luxury of custom pll layout :p
<azonenberg_work> i guess i basically ended up with a DLL in fpga fabric
<sorear> my biggest point of confusion with PLLs in general is "if the local oscillator and the input differ by >100%, the output of the phase comparator will be essentially noise, so how do you start to move the local oscillator in the right direction?"
<sorear> i saw a cute paper a while ago where someone did a synthesizable ASIC PLL; a bunch of CNOT gates wired in parallel = an inverter with a large gate capacitance and a digitally controlled drive strength, put five of those in a ring to make an oscillator with a digital frequency control
<sorear> no obvious analog in FPGA fabric since you can't use fabric signals to turn buffers on and off
<azonenberg_work> Yeah you cant make a true PLL in FPGA fabric
<azonenberg_work> the closest you could get would be a DPLL with a counter that you increment/decrement the period of
<azonenberg_work> to emulate a charge pump
<azonenberg_work> But that requires a reference clock at say 256 times the nominal PLL NCO frequency
<azonenberg_work> so its useless for anything fast
<azonenberg_work> sorear: and i think it's normally a phase-and-frequency detector
<azonenberg_work> If you get two or more edges of the refclk or vco clock in the time you get one of the other
<azonenberg_work> then you know you're way too slow/fast
<azonenberg_work> and adjust accordingly
<azonenberg_work> once you're in the right ballpark frequency wise, you then tweak phase to achieve full lock
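The same acquisition rule as a sketch, with invented step sizes:

```python
# Sketch of the edge-counting rule: two or more edges of one clock per
# edge of the other means the frequency is way off; otherwise track
# phase. The step sizes are invented.
def pfd_correction(ref_edges, vco_edges, phase_err, big=1.0, gain=0.1):
    if ref_edges >= 2 and vco_edges <= 1:
        return +big              # VCO way too slow: pump frequency up
    if vco_edges >= 2 and ref_edges <= 1:
        return -big              # VCO way too fast: pump frequency down
    return gain * phase_err      # right ballpark: tweak phase toward lock
```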
<sorear> mm
_whitelogger has joined ##openfpga
luvpon has joined ##openfpga
m4ssi has joined ##openfpga
GuzTech has joined ##openfpga
<cr1901_modern> sorear: >"if the local oscillator and the input differ by >100%, the output of the phase comparator will be essentially noise, so how do you start to move the locl oscillator in the right direction?"
<cr1901_modern> AIUI, this is called "cycle slip". http://www.delroy.com/PLL_dir/FAQ/faq_cycle_slip.txt
<Bob_Dole> not sure if it's of interest to anyone here, but I'm sure you saw me mention the Acorn CL101. Acorns are a series of FPGA boards being targeted at miners, connect to the m.2 slot on motherboards, may be useful for other things. Shouldn't be that expensive vs. other mini pcie or pcie slot fpga dev cards.
<azonenberg_work> Bob_Dole: what I *am* looking for that might be vaguely similar
<azonenberg_work> is a blade cluster or other system for low end arm socs
<azonenberg_work> i want something like 4-16 separate arm socs with associated ram and ethernet in a couple of U
<cr1901_modern> sorear: I used to know more about this... I would suggest reading Gardner's Phase Lock techniques. I read it about once a year to refresh
<Bob_Dole> azonenberg_work, have you looked at the EOMA68?
<azonenberg_work> no whats that
<sorear> aaaaaaaaaa
<Bob_Dole> it isn't shipping yet but the current "expected ship date" is this month
<azonenberg_work> Bob_Dole: looks like a laptop?
<azonenberg_work> i want rack mount specifically
<azonenberg_work> goal is to have a web, dns, and a few other servers
<Bob_Dole> It is a compute card that has a laptop for it
<azonenberg_work> low traffic
<azonenberg_work> sitting in a rack that run for hours on UPS
<Bob_Dole> uses a 68-pin PCMCIA connector to route various interfaces out for connecting ethernet, usb, etc. to it
<Bob_Dole> not logically/electrically pcmcia
<Bob_Dole> So you could probably make your rack backplane to just plug cards into
<sorear> yes, but then you'd have to interact with luke and you do not want to do that
<azonenberg_work> I would also have to do the board myself
<azonenberg_work> i want something ready to go
<Bob_Dole> SolraBizna, how would you feel about making a card backplane for EOMA68 standard?
<Bob_Dole> wait there might be one already
<Bob_Dole> sorear, I imagine that since the "Standard" is Open, and he was doing well at keeping updates about roadblocks and development around them.. if you don't pre-order and make simpler stuff like backplanes yourself, interacting with the developer himself wouldn't be necessary.
<cr1901_modern> Where have I seen the person behind EOMA68 before? ...
<SolraBizna> you said my nick like 10 times
<SolraBizna> why did only this last one highlight me?
<Bob_Dole> wat
<Bob_Dole> odd
<sorear> he's the guy on the risc-v mailing list protesting against codes of conduct and attempting to hijack the vector extension work
<Bob_Dole> and I thought more like 3 times
<SolraBizna> 10 is more like 3 than it is like 1
<cr1901_modern> oh that's right, so he's an asshole basically
<cr1901_modern> I remember something about that coming up
<Bob_Dole> has he been doing the vector stuff? I thought it was one of the other vocal folks on twitter mostly pressing for vector extensions the powers-that-be didn't like for whatever reason
<sorear> with the caveat that I've really not had bandwidth for riscv stuff recently (homeless for 6 of the past 12 months) there are a few major players
<sorear> Berkeley Architecture Research: Krste Asanović wrote a thesis on single-chip vector microprocessors in ~1999; the group *created* riscv to handle the non-research aspects of prototyping vector designs, the most recent of which was Hwacha (hwacha.org); there is a "V" draft in the spec repository which is heavily aligned with them
<sorear> hwacha has a few weird/rough edges (like using two instruction pointers to fetch vector vs. scalar instructions), they're defining V to be fewer-rough-edges
<cr1901_modern> Oh good... b/c we need a 2d program counter in stuff besides esoteric VMs
<SolraBizna> 2D program counter
<SolraBizna> Gonna have nightmares tonight
<sorear> Andes Technologies and PULP have SIMD designs using integer registers in their cores, which haven't really been adopted by anyone else
<sorear> VectorBlox has something kind of similar but a memory-memory architecture, not using vector registers; however, they exclusively do soft cores, while Andes and PULP are ASIC teams
<Bob_Dole> my thought mostly was the eoma68 -should- be available soon, unless there's another delay, and it's an existing "standard" for small low-power SBCs that seems like once there's card availability, it'd just be a matter of making the backplane. and then others could make additional cards.
<sorear> lkcl is pushing a "simplev" design based on ~magic registers~ that change the behavior of every existing instruction which afaict has not been implemented by anyone
<Bob_Dole> only other SBC thing I'm familiar with is.. gumstick? been a while since I was looking at that
<Bob_Dole> SBC-with-backplaneable-interface*
<cr1901_modern> PC-104? Repurpose ISA?
indy has quit [Quit: ZNC - http://znc.sourceforge.net]
<sorear> the topic of creating a "P" working group to handle fixed-length SIMD using float registers has been broached, unclear where it stands
<sorear> i'm not a Member yet anyway
<Bob_Dole> it really has been a long time since I looked at gumstix, oi
indy has joined ##openfpga
<azonenberg_work> Bob_Dole: yeah i know lots of SBCs and COMs exist
<azonenberg_work> my quesiton was about prebuilt systems :)
<azonenberg_work> basically a 1U chassis with a bunch of COMs in it
<azonenberg_work> or SBCs
<Bob_Dole> Ah, figured you were looking for ideal cards, without expecting a rack that supports a lot of them to pre-exist.
<Bob_Dole> I've never seen anything targeting ARM-SOC level parts for that, pc104 being the closest but I think that's mostly all x86 things using stuff like the vortex86?
<azonenberg_work> Bob_Dole: no i am looking for a ready to deploy solution
<azonenberg_work> specs of the nodes are almost irrelevant beyond "ethernet, runs linux"
<azonenberg_work> one of them will do nothing but run a low traffic BIND instance and a sshd for management
<azonenberg_work> Another will run a static-content web server
<azonenberg_work> etc
<Bob_Dole> I'll keep my eyes out for something then
<azonenberg_work> performance of all of them will be mostly limited by my ~20 Mbps upstream bandwidth thanks to comcast
<Bob_Dole> It hasn't been a type of thing I was actively looking for, and generally avoiding the x86 stuff that I -think- I've seen closer to it.
<Bob_Dole> (because x86 is boring!)
<Bob_Dole> I was a little more interested in odd x86 chips back in '08
<azonenberg_work> so far this is my leading candidate
<azonenberg_work> fits up to four pi's
<azonenberg_work> they make a 20-node 5U version as well but i dont need that much
luvpon has quit [Ping timeout: 276 seconds]
<sorear> wasn't expecting monolithic inductors
<florolf> azonenberg_work: similar, but more shoddy: https://www.pine64.org/?product=clusterboard-with-7-sopine-compute-module-slots
<Bob_Dole> a lot of my Dumb Ideas stem from using a 400mhz UltraSPARC IIi as my primary system for a couple weeks back in '11. There wasn't much it couldn't do, excluding awful resolution on the 8mb ATi Rage II+DVD soldered onto the mobo, and some web related things. I thought, those PC-on-PCI cards would be great to be able to run a couple of these things in when I need it.
GuzTech has quit [Quit: Leaving]
futarisIRCcloud has quit [Quit: Connection closed for inactivity]
pie___ has joined ##openfpga
pie__ has quit [Ping timeout: 244 seconds]
<felix_> huh, isn't that eoma68 thing pretty much dead? last time i looked at that was maybe 1.5 or 2 years ago and even at that time it seemed to be mostly legacy stuff
<felix_> yeah, no pci-e, no displayport, no ethernet and only 2 usb ports... .oO(avoiding planned obsolescence by making things that are already obsolete)
wpwrak has quit [Ping timeout: 252 seconds]
<gruetzkopf> yeah, intentionally 24bit parallel RGB only..
<gruetzkopf> the correct/modern approach would be "route as many HS-lanes as possible to the card edge"
<felix_> yep
<gruetzkopf> PIPE helps you a lot there
<whitequark> pipe?
<gruetzkopf> the fact that basically all modern communication protocols (sata, sas, eth, pcie, usb3, dp, ...) use differential SERDES IO was noticed, there's a spec for the one-size-fits-all phy and its config
<gruetzkopf> stuff like "how to configure preemph, deemph, voltage swing, link rate, encoding"
<felix_> wasn't the pipe interface the interface to the PHY though?
<azonenberg_work> displayport is not pipe based, is it?
<azonenberg_work> neither is sata or sas
<azonenberg_work> pcie and usb3 are
<azonenberg_work> also ethernet is iffy depending on your PHY
<azonenberg_work> base-KX / KR is straight differential serdes, as is the bus to a SFP module
<azonenberg_work> base-T is very much not :p
genii has joined ##openfpga
<felix_> the high speed phys on intel chipsets can speak pcie, sata, usb3 and displayport, but not ethernet
<whitequark> gruetzkopf: oh wow nice
wpwrak has joined ##openfpga
<whitequark> felix_: they speak "sort of ethernet" right?
<whitequark> half pcie rate and they need a special converter
<whitequark> but that's for 1gbase-t
<azonenberg_work> whitequark: i was just talking to somebody about that
<azonenberg_work> as best i can tell it's half rate pcie line coding moving sgmii-esque raw ethernet frames
<whitequark> yep
<whitequark> thats my understanding
<felix_> yep, they speak some half-rate pcie to special phy chips for ethernet
<felix_> haven't looked at the protocol though
<felix_> so yeah, i should have added "without some extra chip"
Bike has joined ##openfpga
rohitksingh_work has quit [Read error: Connection reset by peer]
<gruetzkopf> i haven't seen any documentation on the 10GE ports current AMD CPUs are supposed to have
rohitksingh has joined ##openfpga
emeb has joined ##openfpga
Miyu has joined ##openfpga
rohitksingh has quit [Quit: Leaving.]
rohitksingh has joined ##openfpga
<felix_> uh oh, it seems that there is no toolchain for the coolrunner2 chips that runs under windows 10 :/ has anyone tried ise 14.7 under wsl? for vivado that works, but i haven't heard if it'll also work with ise
<Bob_Dole> felix_, EOMA68 last had an update about 3 months ago. and it's not that much more limited in expansion options than the direction normal laptops are going, except you don't have to throw out the screen and keyboard when you upgrade.
<Bob_Dole> if you want to do more than use it as a netbook/smarttop/smartbook, that does have more than a few drawbacks
genii has quit [Ping timeout: 252 seconds]
<Bob_Dole> unrelated: with various CPUs all using old AMD chipsets for their chipsets, has anyone tried to use things like Slot A motherboards that have working coreboot code for their own non-x86 cpus? (slot-a because it's a card. can fit vrms and what not on it as needed.)
<felix_> i can connect two 2k displays to my two laptops which also have 16gb of ram each ;)
<felix_> i don't remember coreboot having support for that generation of amd processors
<Bob_Dole> they don't now
<Bob_Dole> they stripped out a lot of old stuff
<Bob_Dole> bitrot and such
<felix_> yep, the k8 support was more or less unmaintained and iirc broken
* felix_ doesn't see a point in keeping broken and unmaintained code that prevents certain changes and optimizations of common code
<Bob_Dole> if the code exists, it's documentation and a guide for how to re-add it, if there's an application for it.
<Bob_Dole> if there's no application for it.. no need to readd it
<felix_> well, it's still in the git history :)
<gruetzkopf> Bob_Dole: the interface choices it makes are "interesting" to "nearly useless"
<gruetzkopf> even when it was started, 24bit parallel RGB was used only by the most low-end of displays, and it uses a lot of pins
<gruetzkopf> no processor that's anywhere near modern or desirable can directly output it, so you're converting on both sides of the card connector for an interface which eats nearly half your pins
<Bob_Dole> I don't think there's any processor near modern or desirable that has the option of not using proprietary blobs
<Bob_Dole> probably chosen for compatibility with un-modern processors like the A20 the "reference" card is being designed around
<Bob_Dole> I think I was telling solra that the pcmcia connector seemed weird to me, wouldn't even some of the SCSI connectors offer as many pins and be better designed for higher speeds?
<Bob_Dole> if not going card-edge for whatever reason
<azonenberg_work> felix_: ise 14.7 runs fine under linux :p
emeb has quit [Ping timeout: 246 seconds]
emeb has joined ##openfpga
<felix_> azonenberg_work: sadly my user experience of the linux desktop is still worse than on a modern windows or osx. i'll try installing ise in the wsl userland; at least that works for vivado, so i guess that it might be worth a try
ayjay_t has quit [Read error: Connection reset by peer]
ayjay_t has joined ##openfpga
<whitequark> cc azonenberg_work
<felix_> oh, nice (even though that makes the jtagmonster florolf and i started working on a few years ago sort-of obsolete)
<openfpga-github> [Glasgow] whitequark pushed 1 new commit to master: https://github.com/whitequark/Glasgow/commit/4afd2fd75b78ce8fc960b6f68c19fd472c85565e
<openfpga-github> Glasgow/master 4afd2fd whitequark: applet.jtag.pinout: new applet.
<florolf> felix_: i'd rather call that "abandoned working on a few years ago" :p
<felix_> well, started and then sort-of abandoned :/
<felix_> i still have some todos on my todo list for that project, but i guess that i can finally remove that project from my todo list
rohitksingh has quit [Quit: Leaving.]
<whitequark> felix_: heheh
<whitequark> it's really remarkable how easy this was
<whitequark> like it was literally 3 hours of work
<whitequark> from start to end
<travis-ci> whitequark/Glasgow#86 (master - 4afd2fd : whitequark): The build has errored.
<felix_> are you planning to crowdfund the final design of glasgow? it seems to be really useful for a rather broad range of applications
<felix_> Bob_Dole: on maintenance burden: i spend maybe 80% of the unpaid time i work on coreboot on maintenance and reviewing patches. so lowering the maintenance burden (and also the burden on developers) is always something i'd aim for; especially when the big bottleneck is maintainer time
<felix_> well, and usually the people that are most vocal against removing bitrotted stuff aren't the people who maintain and develop things...
m4ssi has quit [Remote host closed the connection]
<kc8apf> azonenberg_work: I've been looking for a similar blade chassis for my home kubernetes cluster. I push a lot of data on my internal network so I'm not fond of any solutions built around rPi with its USB ethernet. I picked up an ESPRESSObin (http://espressobin.net/) to see how useful it is as a single node. If it works out, I'll probably work out a blade+chassis design.
<Bob_Dole> felix_, I understand that. I did say, if there's already code for it, it's a guide to get it fixed back up if there's an application for doing so, but the specific Interest in that old code is more of how to support old chipsets on non-x86 processors if the code ever existed to begin with. I doubt there's much application for bringing it back as an x86 platform
<Bob_Dole> if there was, it'd still be there as a guide to re-add it, and if there isn't, no point in re-adding it
<Bob_Dole> I feel like old Slot A chipsets probably had some consideration for non-x86, as wasn't there some plan for the AMD chipsets to be available for DEC workstations (that never panned out in practice?)
mumptai has joined ##openfpga
<Bob_Dole> it just seems like those boards could offer a lot for a Platform, depending on a few variables including if there's Practical knowledge of supporting them at firmware level already documented somewhere.
<qu1j0t3> cyrozap: [bloomberg blowback] https://twitter.com/RidT/status/1053097782427422721
<kc8apf> ugh. my employer had to convince customers that they don't use Supermicro boards in products
<Bob_Dole> last super micro board I had was a dual skt370 that never actually worked
<qu1j0t3> you should have inspected it for spy chips
<sensille> doesn't matter, your routers are fully equipped anyway
<OhGodAGuardian> sensille: what if my FPGA is my router?
<OhGodAGuardian> (seriously, she's packing 4 Ethernet at 10Gb/s.) :D
<sensille> to be sure build from 74xx
<Bob_Dole> have you seen mycpu.eu sensille ?
<Bob_Dole> (because it's built from 74xx chips)
rohitksingh has joined ##openfpga
<qu1j0t3> what is going on over there
<prpplague> interesting
<sensille> oh. we had to do that at university
<sensille> building completely was too much, the years before had done that
<sensille> so they just introduced errors and we had to debug that monster
<prpplague> Bob_Dole: haven't seen that page before, nice read
<cyrozap> Ugh, now I understand why people hate compiling software--it's such a PITA to build Debian packages. The docs are scattered around on the internet, you need a specific directory structure to make it work, and good luck trying to backport a package from sid to testing...
<cyrozap> I've really been spoiled by Arch.
<sensille> ./configure; make all install
<kc8apf> cyrozap: build normally then use fpm
<cyrozap> sensille: Install directly to the system without a package? But then what will you do when you need to uninstall the software? Delete the files manually like a caveman? ;)
genii has joined ##openfpga
<sensille> just reinstall the system from time to time
<SolraBizna> just pretend "make uninstall" and "make deinstall" are widely supported and complete
<SolraBizna> in seriousness, if I'm installing something from source I think I might want to delete later, I install it in a subdirectory of /opt
<SolraBizna> all too often I encounter software that will install in /usr in spite of my attempts to install it elsewhere... I hate that
<qu1j0t3> sensille: :D
<Bob_Dole> the DDR3 mrams are kinda hard to find in single-quantities. In the future, think there's much chance of organizing a group-buy of them?
<Bob_Dole> like when the ecp5 has all the bits in place for using ddr3
<daveshah> Not sure if mram is a typo or not?
<daveshah> Can't find much evidence that ddr3 mrams are still available at all
<sorear> daveshah: no
<Bob_Dole> looked like they were on mouser still but non-stocked and no single quantities
<daveshah> Yes, factory special order is a bit ominous though
<sorear> is the specific part Bob_Dole and SolraBizna are talking about
<daveshah> I think getting a group buy of 380 would be tricky
<sorear> (my model still does not track you as separate people, sorry for this)
<sorear> tbh i'm not quite sure what the point is - 256Mib is *really* small
<sorear> other than (paraphrasing from earlier) "I have an irrational hatred of flash"
<sorear> i'd like something with flash chips and capacitors that can keep the flash powered for 2.2ms to finish an operation
<Bob_Dole> I'm still looking, somewhat passively, if anyone else has them
<SolraBizna> 256Mib would be enough to fully populate the natural address space of a 65816 twice over
<Bob_Dole> since there's no immediate ability or guarantee to be able to use them.
Bike has quit [Ping timeout: 256 seconds]
<sorear> i'm guessing you can get smol smd 1mF/2V supercaps now
s_frit has quit [Remote host closed the connection]
s_frit has joined ##openfpga
<SolraBizna> between acquisition problems and the near-certainty that it won't work at low clockspeeds, I should give up on the DDR3 MRAM and go back to a hybrid parallel MRAM/SRAM design
<SolraBizna> I wanted an all-nonvolatile memory space but I don't want to spend $360 on memory for a machine containing not even $100 in other parts
<daveshah> Could just go old school and go for low power CMOS SRAM with a CR2032
<SolraBizna> The only project requirement that hasn't been compromised yet is that this thing is supposed to be functional after 20 years
<SolraBizna> that rules out any battery technology I can get my hands on
<Bob_Dole> eh there's some nuclear batteries that might work. tritium is the usual for them but there's other sources that are Legal To Own
<Bob_Dole> you just have the problem of "it's lightly radioactive" and big. and needing specialized shielding.
<SolraBizna> these seem like much worse problems than we already have :P
<sorear> radiation hazards are vastly overestimated by the general public
<sorear> radiation isn't harmless but imo it's less scary than, say, mains
<Bob_Dole> yes, but the radiation will be on board with the components so if shielding fails it'll cause some extra errors
<SolraBizna> that'd be the thing I'd be most worried about
<SolraBizna> if the memory is supposed to be nonvolatile over long timescales, putting an ionizing radiation source next to it doesn't sound like a net win
<Bob_Dole> same. a respirator and hand-washing station would be enough protection from any radiation source I would Consider working with, but an IC can't wash its hands.
<SolraBizna> not to mention that we couldn't ship that by post
<Bob_Dole> I'm pretty sure you can if it falls under a certain activity level
<sorear> from a philosophical point of view, if you want confidence that a device will retain data after 20 years, you need to be looking at tech from <1998
<Bob_Dole> I could buy some uranium glass that'll excite a geiger counter on ebay just fine.
<Bob_Dole> or fiesta ware
<SolraBizna> as much as I wanted all-nonvolatile, I wouldn't actually make good use of that on a software level
<sorear> how about punch cards
<SolraBizna> I really should just hunker down, get one or two MRAM modules for "ROM", and fill the rest of memory space with volatile SRAM
* prpplague perks up with the mention of punch cards
<SolraBizna> give up on my dreams... ;_;
<prpplague> what are we discussing?
<sorear> punch cards in the right material should last millions of years
* prpplague reads the scroll back
<Bob_Dole> FALCONE PAUNCH cards
* prpplague is not sure what problem is being debated
<sorear> honestly I have no real idea what their goal is
<sorear> something related to bitcoin and retrocomputing, or possibly two projects
<SolraBizna> our goals are "I hate Flash" and "I hate it when electronics die of old age"
<SolraBizna> Bob_Dole is humoring my wish to make a hilariously underpowered invincible computer, in the hopes that he will trick me into doing enough of it through FPGAs that I can make him rich with bitcoin
<SolraBizna> which is working, so far
* SolraBizna glares at Bob_Dole
<Bob_Dole> I have little expectation of bitcoin
<prpplague> ahh
<sorear> make something free of semiconductors
<SolraBizna> lol
<sorear> core memory + mag amp logic
<prpplague> decimal based computing systems are easy to create with electromechanical systems
<Bob_Dole> I'm more interested in "I do not like the direction major corporations are taking modern computing into proprietary directions" and want something that is usable, and interesting, and exceptionally open, while also following and using enough standard components to be viable to produce
<prpplague> Bob_Dole: ahh
<sorear> what does aging look like for bulk copper and ferrites
<SolraBizna> and that is why I'm sitting on ~60% of a design for a 65816-based computer that can run Linux
<Bob_Dole> I also do have some interest in perhaps having a Product that can be sold to more than 3 people
<Bob_Dole> but I am uncertain of market segments that could be catered to, perhaps Amiga or (classic 68k) mac fans
<SolraBizna> honestly, I should see if I can successfully fit *my* goal into one MRAM module
<SolraBizna> more than one isn't needed if you're just going to load Linux from an SD card
<sorear> so are you going to compile linux for the 65816? interesting choice
<Bob_Dole> I've only suggested "a unix"
<Bob_Dole> Minix may be more suitable for a 65816 tbh
<SolraBizna> as part of meeting my own goals for this, I'm going to be making an LLVM backend for 65816
<Bob_Dole> wait, I wonder if minix is already available for the 65816..
<SolraBizna> I could then write my own terrible UNIX for it, but I don't really want to
<SolraBizna> sometimes I do, but Bob_Dole hasn't successfully taken advantage of that yet
<sorear> Linux assumes it has a 386, but historical unixes are fine with 16-bit machines with not-quite-flat address spaces
<sorear> i trust you've seen http://www.homebrewcpu.com/
<SolraBizna> If you're determined enough, you can pretend the 65816 has a linear 24-bit address space
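What that pretence looks like in practice; the catch is that most 16-bit address arithmetic wraps within a 64 KiB bank rather than carrying into the bank byte:

```python
# A 24-bit "linear" 65816 address is just bank:offset concatenated.
# The leak in the pretence: 16-bit address arithmetic mostly wraps
# within the 64 KiB bank instead of carrying into the bank byte.
def linear(bank, offset):
    return ((bank & 0xFF) << 16) | (offset & 0xFFFF)

def add_within_bank(bank, offset, n):
    return bank, (offset + n) & 0xFFFF      # the carry is simply lost

print(hex(linear(0x12, 0xFFFF)))            # 0x12ffff
print(add_within_bank(0x12, 0xFFFF, 1))     # (18, 0), not 0x130000
```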
<prpplague> fuzix which is a reboot of the original UZI is a much better choice for something like that
<sorear> the address space is not quite linear and you still don't have real PIC
<sorear> although the MMU you're hacking on might be able to deal with the PIC problem
* prpplague had UZI running on gameboy a decade ago
<SolraBizna> ugh, shared libraries
Bike has joined ##openfpga
<SolraBizna> fun fact: of all things, dynamic linking is the thing I would be least interested in implementing myself; not sure why
rohitksingh has quit [Quit: Leaving.]
<Bob_Dole> SolraBizna, you know, everything modern from intel runs Minix, since they used it for the Intel ME.
<prpplague> well the ME portion is on a separate core
<prpplague> so that isn't exactly correct
<Bob_Dole> it's the product and overlaps the memory address space, doesn't it?
<sorear> if you don't have a MMU and you want to have multiple text segments simultaneously resident, you need either PIC or a fancy loader that can apply relocations
<sorear> this is about multiprogramming, not shared libraries
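A miniature of the "fancy loader" alternative, assuming a hypothetical 32-bit image format whose relocation table is just a list of byte offsets:

```python
# Load-time relocation in miniature: the table lists every offset in
# the image that holds an absolute address; add the load base to each.
# The image format here is hypothetical (little-endian 32-bit words).
import struct

def load(image: bytearray, relocs, base):
    for off in relocs:
        (addr,) = struct.unpack_from("<I", image, off)
        struct.pack_into("<I", image, off, (addr + base) & 0xFFFFFFFF)
    return image

img = bytearray(struct.pack("<I", 0x100))     # one pointer, at offset 0
load(img, [0], 0x40000)
print(hex(struct.unpack_from("<I", img)[0]))  # 0x40100
```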
<prpplague> Bob_Dole: no it has a separate memory space, and a completely separate core
<Bob_Dole> I knew it was a dedicated core, it's ARC isn't it?
* prpplague can't comment on it in detail - still under NDA
<SolraBizna> well, we do have an MMU, so... :D
<sorear> originally ARC, then SPARC, then x86 (Quark derived)
<jn__> the ME generations with Minix have an old x86 core
<prpplague> my NDA doesn't expire until next august
<Bob_Dole> oh I thought it was SPARC then ARC, didn't know they moved to actually having x86
<sorear> it also runs with the main memory system in power-down, so it clearly must be at least somewhat separate memory-wise, although nothing I know *rules out* having shared windows
<Bob_Dole> all I really know is I've never been fond of the idea it's mandatory
<zkms> SMM is the one that has memory that lives in main memory
<sorear> that it exists is pretty unremarkable. that it runs signed proprietary code from intel is more meh, although it's a strange thing to focus on when so much else is on fire
<sorear> idk why ME has become such orange website bait when the problem is the economic system and the massive conflicts of interest between intel and their nominal customers
<zkms> sorear: i think the "runs bad C code", "is connected to the network", "is quite nontrivial to disable fully (and can be re-enabled by malicious code)", and "has access to the AP and its peripherals that an implant would love" is sufficient, no?
<sorear> why do we care about that and not, say, the intel-provided platform initialization blobs that run as part of every bios (even coreboot)
<SolraBizna> I care >_>
<SolraBizna> one of the things that keeps me interested in my current project is that I can "know" all of the code, down to the individual bits
<sorear> afaik ~all large arm and ppc socs have management cores with similar capabilities, the only difference is that the code that runs on them is written by the integrator, not the soc vendor
<zkms> sorear: i mean; I care about those, but those only run once and don't parse arbitrary network data
<Bob_Dole> I care about those
<Bob_Dole> and that's why I want to get solra to make something.
<Bob_Dole> that is Totally Open
<Bob_Dole> a little annoyed by the ATi ATOMBIOS being a couple of kb that's totally closed while the rest of it is open, too
<zkms> a bringup core that sets up clocks and PLLs and such is one thing; a management core that has a non-removable link to the network and is literally /designed to be/ an execution context that the AP can't necessarily disable when started (it'd defeat the criteria for manageability) is another
<Bob_Dole> why, why is that tiny little chunk closed
<Bob_Dole> SolraBizna knows I occasionally bring up gpu related things too, because there's no Totally open things really available. >.>
<SolraBizna> I'm really excited by the HiFive Unleashed, it being the closest thing so far to a whole system that's "truly open" enough for me while still being "fast enough"
<sorear> is it actually better than a random gpu-less quad A7 soc?
<SolraBizna> IIRC, it's not as good as commonly available ARM SOCs at the same core count and clockspeed
<SolraBizna> but it's closing the gap
<SolraBizna> I think it's at least as good as the PPC750, which has been my favorite core to date
<sorear> I mean better in the sense of openness
<SolraBizna> ah
<Bob_Dole> isn't the A7 kinda Ass for the modern armv7 cores?
<sorear> most of the rtl is public, but I'm not sure how much that matters?
<sorear> Bob_Dole: the A7 is the closest Arm equivalent to the cores in the hifive unleashed
<Bob_Dole> Point.
<SolraBizna> when it comes to RISC-V vs. ARM, I am more excited by RISC-V because I feel let down by ARM
<SolraBizna> I was really excited about ARM in the early 2000s, but in practice the ARM ecosystem right now is in some ways worse than what we had before
<sorear> you mean like "the SBBR/SBSA came way too late", or other reasons?
<sorear> riscv extremely doubles down on the "no two chips will be quite alike" thing
<SolraBizna> It's full of closed architectures and NDAs and missing documentation and patents
<SolraBizna> that's true, but at least there's a core specification for floating point
<sorear> the hifive unleashed has complete documentation for the parts sifive designed, which is something
s_frit has quit [Remote host closed the connection]
<sorear> i wish they'd make up their minds whether the L2 RTL is or isn't coming out ever
s_frit has joined ##openfpga
<Bob_Dole> isn't that a Foundry related problem?
<sorear> arm has what, 4 core specifications for floating point? 4 times as good
<sorear> no
<SolraBizna> that's one of the ways ARM is twice as good as x86
<Bob_Dole> one of these companies couldn't release something because of the foundry NDA'ing their standard-cells too much
<sorear> the OTP, memory controller, PLLs, ethernet mac were licensed from third parties and so sifive is hamstrung in what they can do
<SolraBizna> though x86 has the "advantage" of implementing both FPU architectures at the same time... o_o;
<sorear> but the L2 cache is synthesizable verilog, it's no more encumbered than the core
<sorear> er, probably chisel
<SolraBizna> all I really want is an open core compatible with (and comparable to) the PPC750, and then I'll be happy for the rest of my life
<Bob_Dole> it was a kinda weird thing to me, that they couldn't let people know how to make a design for their process without committing to making a design for their process.
<sorear> remind me what that is
<Bob_Dole> G3
<SolraBizna> (also Wii and GameCube core CPU)
<Bob_Dole> 74xx is g4, 970 is g5
<Bob_Dole> then you also got the 603 and 604 for earlier PPC chips that actually got any use
<SolraBizna> in a world where I haven't learned RISC-V assembly yet, PowerPC remains my favorite ISA
<SolraBizna> though I think ARM's conditional execution bits are cool, and 68k's ludicrous orthogonality is nice sometimes
<sorear> irrationally i hate ppc assembly syntax
<sorear> love to have registers and small immediates look exactly the same
<SolraBizna> speaking as a PPC fan... I don't think that hatred counts as irrational :)
<SolraBizna> I agree with the syntax that requires registers to be qualified with `r` and immediates with `#`, I forget if that's what gas used
<sorear> I don't have a strong opinion on it deeper than that, although most of the old riscs are kinda meh for PIC
<SolraBizna> I think PPC requires a GOT for PIC
<Bob_Dole> oh yeah speaking of 68k, I should go look into the Apollo core. 68k softcore meaning to be more compatible with the 68040 than the 68060 but faster than the 68060.
<sorear> GOTs are part of the unix definition of PIC, but only a few arches need to permanently reserve a register to hold a pointer to the GOT
<SolraBizna> SPARC has branch delay slots, which... yeah, I know they seemed like a good idea at the time, but seriously?
<SolraBizna> I forget why I hated MIPS
<sorear> mips and sparc both have delay slots
<Bob_Dole> didn't MIPS-X remove that?
<sorear> no, mips-x added a second delay slot
<sorear> jump; nop; nop;
<Bike> it's class all over again..............
<SolraBizna> .........
<SolraBizna> well, I guess that's less than twice as bad as one delay slot
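A toy interpreter making the delay-slot semantics concrete (a simplification: real hardware fetches the slot instructions in order rather than replaying them):

```python
# Toy pipeline: on a taken branch, the nslots instructions after the
# branch still execute before the target does.
def run(prog, nslots, max_steps=12):
    """prog: list of ("op",) or ("jump", target); returns PC trace."""
    pc, trace = 0, []
    while 0 <= pc < len(prog) and len(trace) < max_steps:
        trace.append(pc)
        if prog[pc][0] == "jump":
            for slot in range(1, nslots + 1):   # delay slots run first
                if pc + slot < len(prog):
                    trace.append(pc + slot)
            pc = prog[pc][1]
        else:
            pc += 1
    return trace

prog = [("add",), ("jump", 4), ("nop",), ("nop",), ("halt",)]
print(run(prog, nslots=2))   # [0, 1, 2, 3, 4]: two slots, MIPS-X style
print(run(prog, nslots=1))   # [0, 1, 2, 4]: one slot, classic MIPS/SPARC
```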
<sorear> sparc was the architecture that inflicted register windows on the world
<Bob_Dole> I still have my SPARC box :D
<sorear> the oldest and worst-i've-seen implementation of delay slots
<Bob_Dole> SolraBizna, do you still have your SPARC system?
<jn__> i think Broadcom's "QPU" cores in the raspi shader unit have three branch delay slots
<sorear> user mode has SAVE for calls and RESTORE for returns, but longjmp() requires an effing syscall
<SolraBizna> o_O
<sorear> (mostly unrelated)
<SolraBizna> Bob_Dole: I think we sold it... it might be lost in the literal quagmire that is my parents' house's basement
<SolraBizna> register windows seem like they would be a good idea except for what happens when you run out of registers or need to context switch
<sorear> ye olde sparc also requires trapping into the kernel to spill old registers if you do calls more than 8 deep
<sorear> they made the hardware capable of doing that without a trap in v9 or so
<sorear> itanium also has register windows, except they "fixed" the sparc problems in a way that managed to be worse: there is a "register stack engine" which operates asynchronously with normal execution, and has its own privilege level which may or may not match the main pipeline privilege level at any given instant
<SolraBizna> :C
<Bob_Dole> how did Itanic last so long
<Bob_Dole> I have never heard anything good about it
<Bob_Dole> and yet there's HP still asking for more
<jn__> enough companies thought it was the next great thing
genii has quit [Remote host closed the connection]
<sorear> am29k has the best register windows, they put the register offset, high water mark, and low water mark in ordinary global registers that can be manipulated without fuss from user code
<sorear> the one good thing about ia64 is that l4ka::pistachio was able to get context switches down to 34 cycles
<sorear> but microkernel research in the post-spectre era is going to be Interesting
<SolraBizna> that's... impressive
<sorear> ia64 is extremely "second system syndrome" and "poor communication and poor management of complexity during the design phase"
<sorear> it has some curious features, and some curious lack-of-features
<sorear> ia64 has no 128-bit data path anywhere, but the hardware x86 decoder supports SSE2, mapping xmm registers to *pairs* of itanium (80-bit, natch) FPRs
<sorear> the design philosophy seems to have been "we're going to scale issue width to infinity, anything that could be expressed as independent instructions is unnecessary"
<SolraBizna> why not go all out and have VVVVVVLIW?
<sorear> rotating. predicate. registers.
<sorear> ia64 isn't a VLIW
<sorear> it's a multi-issue RISC trying to pretend to be a VLIW
<sorear> with 41-bit instructions
<sorear> a real VLIW would have a large number of slots dedicated to different functional units, exposed register file partitioning, and no bypass/interlock logic
<sorear> ia64's bundle mechanism could get rid of bypassing and interlocking in the large-bundle limit, but as soon as you need to have two bundles simultaneously in flight you need to worry about RAW/WAR/WAW
<felix_> ha, the linux version of ise 14.7 works under wsl
<pie___> felix_, is this going to be something extremely ironic like the linux version on wsl works better than the windows version or something
<sorear> i'm not a Highly Paid Architect but it seems to me like they could have done much better to require *two* stops before you are allowed to architecturally reuse a register, then the machine can work in parallel on any combination of instructions from two adjacent bundles without conflict checks
<sorear> still within the basic ia64 paradigm of "firehose of mostly independent ops"
<sorear> think about it, vectors are just compression for VLIW
<azonenberg_work> kc8apf: all of my stuff that needs more horsepower is going to be x86's with 10/40G PCIe NICs
<azonenberg_work> this is for internet facing stuff and i'm bottlenecked by the cable modem
<kc8apf> I mostly want storage servers
<kc8apf> planning 1 node per disk
<kc8apf> running Ceph
<azonenberg_work> ah, ok
<azonenberg_work> i'm thinking about upgrading my storage at some point
<azonenberg_work> right now its an i3 with two 4TB spinning-rust raid1's
<kc8apf> then a few x86 machines to do the bulk of the compute
<azonenberg_work> i've thought about ceph since NFS sucks
<azonenberg_work> Honestly though, for my main use case one node per disk seems like massive overkill if you just want to be able to max out 10GbE
<kc8apf> depends
<kc8apf> Ceph is pretty heavy on RAM
<azonenberg_work> How heavy?
<kc8apf> I'm finding that these older AMD Sempron boxes with 2GB RAM are _just_ barely enough to run Ceph on Ubuntu reliably with a 2TB disk attached
<azonenberg_work> my current storage box has 16G for caching, if i was building a new one i'd likely go more (and make it ECC)
<azonenberg_work> And that's with 4TB of raid'd storage
<zkms> :3
<azonenberg_work> also i'm not super familiar with how ceph works at the low level
<azonenberg_work> does it run on raw disks/partitions or a mounted partition like nfs?
<kc8apf> raw disks/partitions
<azonenberg_work> Does it provide disk redundancy functions or should i run it on a md device?
<azonenberg_work> (if i want N+1 drive redundancy)
<kc8apf> it does replication
<kc8apf> and tries to account for hierarchical failure domains
<kc8apf> so it will try to spread the replicas (or RS code chunks) across nodes and racks
<azonenberg_work> So if i wanted redundancy i'd be best off with 2+ nodes
<azonenberg_work> with no raid per node?
<kc8apf> yeah, 3+ nodes is ideal
<kc8apf> assuming you put enough storage on each node to allow for a replica per node, you can tolerate 2 node failures
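The failure-domain spreading in miniature; this is a rendezvous-hash toy, not Ceph's actual CRUSH algorithm:

```python
# Replica placement in miniature: rank disks by a per-object hash, but
# never put two replicas in the same failure domain (here, same host).
# A rendezvous-hash toy, not Ceph's real CRUSH algorithm.
import hashlib

def place(obj, disks, replicas=3):
    """disks: list of (host, disk); returns the chosen (host, disk)s."""
    key = lambda d: hashlib.sha256((obj + d[0] + d[1]).encode()).hexdigest()
    chosen, hosts = [], set()
    for host, disk in sorted(disks, key=key):
        if host not in hosts:            # one replica per failure domain
            chosen.append((host, disk))
            hosts.add(host)
        if len(chosen) == replicas:
            break
    return chosen

disks = [("node1", "sda"), ("node1", "sdb"), ("node2", "sda"),
         ("node3", "sda"), ("node4", "sda")]
print(place("myobject", disks))          # three replicas, three hosts
```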
<kc8apf> If you run Kubernetes, Rook makes setting up Ceph rather simple
<kc8apf> then you have distributed storage and compute
<azonenberg_work> I'm mostly interested in a filesystem i can mount on multiple client nodes at this point
<kc8apf> Ceph starts with a distributed block storage layer
<kc8apf> so you can provide raw block devices to clients
<azonenberg_work> So how do you store files on it and access from multiple client nodes? Do you need a single front-end server running (say) EXT* and sharing out by NFS or something?
<kc8apf> CephFS builds on top and provides a POSIX filesystem
<azonenberg_work> oh, ok
<azonenberg_work> So you run a CephFS client on each client device?
<kc8apf> or you can serve iSCSI
<kc8apf> yup
<kc8apf> there's another layer that can serve an S3-style object storage too
<azonenberg_work> Yeah i'm not looking for a SAN here, I just want redundant, high bandwidth filesystem over LAN to start
<azonenberg_work> when i do the buildout i'll be sticking a 10G NIC in each node but only running the fiber at 1G to start until LATENTRED is ready
<kc8apf> only way to get high bandwidth is to distribute blocks over multiple spindles
<azonenberg_work> (since I dont have any 10G switching)
<azonenberg_work> I was actually considering going full flash in the new deployment
<azonenberg_work> (Whenever that is - all this is contingent on me finishing construction in the new lab)
<azonenberg_work> So wait, I'd run one CephFS server and then export that over NFS?
<azonenberg_work> Not a CephFS client with each endpoint device talking straight to the SAN?
<kc8apf> can go either way or both
<azonenberg_work> I was hoping to get rid of NFS :p
<kc8apf> Ceph runs a daemon per disk to serve raw blocks
<kc8apf> CephFS runs a daemon that maintains metadata for a POSIX-style filesystem
<kc8apf> there's a FUSE filesystem for CephFS
<kc8apf> or you can expose it via NFS
<azonenberg_work> So you can have multiple CephFS clients mounting the same filesystem concurrently?
<kc8apf> yes
<azonenberg_work> Simultaneous access is my main requirement that I can't get with something like iSCSI
<kc8apf> including a mix of NFS + FUSE
<azonenberg_work> Also are you familiar with any of the other parallel filesystems like Lustre?
<azonenberg_work> That's what i had been looking at years ago when i started thinking about this