<mithro> I added some information around silicon sizes in the spreadsheet at https://docs.google.com/spreadsheets/d/1yUl9c6lxkYDN1bbEUWcBpHScqANnfPmNZjaOXRgHm_8/edit#gid=100732368
<mithro> tnt: I'm about 6 months ahead of you and still don't understand 90% of it :-P
<mithro> Hey, I never thought I would end up on theregister....
<tnt> Trying to understand what that last spreadsheet means. So you put 50 different designs (well, really 42, since the designs are square and the mask is not) on a mask, then you step that mask ~25 times across a wafer to fill it (so you get 25 copies of each of the 42 different ICs).
<tnt> What's "number of wafers": 22 ? That would mean you get 550 chips of each design ? (22 * 25)
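(The chip-count arithmetic tnt is working through can be sketched as follows — the 42-designs, ~25-shots and 22-wafers figures come straight from the conversation, everything else is just multiplication:)

```python
# Rough shuttle-run arithmetic using the figures quoted in the chat:
# 42 designs per mask, ~25 reticle shots per wafer, 22 wafers per lot.
designs_per_mask = 42
shots_per_wafer = 25   # times the mask is stepped across one wafer
wafers_per_lot = 22

chips_per_design = shots_per_wafer * wafers_per_lot
print(chips_per_design)  # 550 dies of each design, before yield loss
```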
<mithro> tnt: Yeah roughly
<tnt> I thought we'd only get ~100 hence my surprise :p
<mithro> tnt: A full wafer lot is between 20 and 25 wafers
<mithro> @tnt I said O(100s) with minimum 100
<mithro> Plus there are packaging costs on top
<mithro> And we want about 100 of each harness design for analysis
<tnt> right.
<mithro> tnt: Underpromise, over deliver :-P
<tnt> I've been playing with yosys targeting the hd cell library and so far things are way smaller than I expected. So either I'm doing something wrong, or 10 mm^2 is actually quite a lot.
<tnt> Like a Vex (just a random small config I use on the iCE40, but still with a 1 kB I-cache implemented in DFFs) is 0.7 mm^2
<daveshah> A Cortex M3 is 0.4mm² according to https://groups.inf.ed.ac.uk/pasta/pub/A0-DEMOfest_Poster.pdf
<mithro> 10mm^2 is quite a lot
<mithro> ARM "minimum" Cortex-M0 typical configuration (including wakeup interrupt controller and debug access port) is 0.25 mm² @ 100 MHz.
<mithro> tnt: The silicon size is mainly dictated by the number of IO
<tnt> Oh yeah, right, I hadn't thought about that. How does that even work for a WLCSP? Are the pads the balls sit on in the top metal layer?
<mithro> tnt: Yes - otherwise the majority of the cost actually becomes packaging the ICs
<mithro> Hence why it's roughly 7 balls x 7 balls
<mithro> tnt: I would really like people to look at things like OpenFPGA and multi-project in a single IC
<mithro> IE if you are able to get one vexriscv configuration going, then you should dump like 5 variants into the same project or something
<mithro> tnt: Less manual work, more automatic stuff
<tnt> I'm just trying stuff out to get a feel for it. OpenFPGA looks like it might need actual custom cells developed for the process, which is probably way over my head. ATM I'm trying to think of various things that can be put in there in a way that doesn't conflict, forms a somewhat coherent result, but can be isolated/disabled/unused if it turns out that part doesn't work, without impacting the rest.
<tnt> I'd love to try my hand at some custom cells, but I'm afraid I'd mess something up that would cause latchup or a short or something that would render the whole chip unusable.
<prpplague> tnt: hehe maybe start off with an unconnected layer with just a logo or something, hehe
<prpplague> tnt: hehe write "Hello World" in a metal layer!
<prpplague> mithro: if my custom design doesn't make use of the provided risc-v core and harness, are all 7x7 pins available for usage, or will there be some minimal number of pins reserved for the risc-v core?
<mithro> prpplague: You'll get about 40ish pins
<mithro> @tnt prga uses standard cells IIRC
<prpplague> mithro: i assume then some pins will be reserved for gnd/power and maybe jtag or some other debug method? uart maybe?
<mithro> prpplague: pwr, gnd, 4 for spi / debug / etc
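(A rough sanity check on the pin budget being discussed — 7x7 WLCSP minus power/ground and debug pins reserved by the harness. The reserved-pin breakdown below is a guess based on mithro's "about 40ish", not a final pinout:)

```python
# Hypothetical pin-budget sketch for a 7x7 ball WLCSP; the reserved-pin
# counts are assumptions based on the discussion, not a final pin plan.
total_balls = 7 * 7            # 49
reserved = {
    "power/ground": 4,         # assumed count, actual number TBD
    "spi/debug": 4,            # "4 for spi / debug / etc"
}
user_io = total_balls - sum(reserved.values())
print(user_io)  # 41, consistent with "about 40ish pins"
```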
<prpplague> mithro: dandy
<prpplague> makes sense
<mithro> Exact pin planning is still a WIP
<prpplague> mithro: yeah, figured as much, but i just wanted to make sure i had the "mindset" you guys were using before going forward
<mithro> There will be 2 sets of pins designed for high speed digital / analog signals (gnd, tx_p, tx_n, vcc -- gnd, rx_p, rx_n, vcc)
<mithro> You can use them for plain old IO if you don't need the high speed stuff
<prpplague> dandy
<mithro> The aim for the high speed pins is to enable things like PCIe / DisplayPort / SATA / etc
<prpplague> yeah, diff pair stuff
<prpplague> serdes
<mithro> USB3.0
<prpplague> indeed
<mithro> Yeap -- the eventual goal is to have someone develop a decent SERDES we can move into the harness and give people a high speed way to do an IO expander type thing
<mithro> IE connect your IC to something like an ECP5 / Crosslink / Artix 7 and get a lot of IO that way
<prpplague> understood, a well thought out process imho
<mithro> Been working on this for a while and stealing good ideas from other people who are a lot more experienced in this area
<mithro> Have people looked over http://bjump.org/ ?
<tnt> Oh nice about the high speed pairs. I think the final IO capabilities are currently the thing I'm most curious about.
<prpplague> not i, that hasn't come up in my searches before, i'll pass that along to karimyaghmour and some others
<mithro> Sadly that wasn't a cost effective method for O(10k) ICs
<mithro> tnt: It's still an open question -- there are no datasheets for the IO cells we have been provided
<prpplague> interesting
<prpplague> mithro: yea i have for a while been looking at going through MOSIS for one of their shuttle programs, and had priced out a wire bonder to do Chip-on-Board packaging
<mithro> The academics are normally used to getting back maybe 5 ICs
<mithro> Which they *maybe* bond 1-2
<tnt> Just enough to write the paper :p
<mithro> Sadly, yes
<prpplague> surprisingly, you can get a refurbished wire bonding machine for about $5k , so they aren't that expensive
<tnt> I know at least here they have one and can bond chips if they need to, but since it's manual, they just do it as needed, they won't bother bonding chips they won't use.
<mithro> The operator who won't destroy your ICs is the more expensive part :-P
<prpplague> mithro: hehe yea
<mithro> But that doesn't scale to what I'm trying to do
<prpplague> mithro: exactly
<prpplague> mithro: we were just looking at 10 pieces, so very small scale
<karimyaghmour> There seem to be automated wire-bonding machines from chinese vendors, but I bet the instructions are in chinese as well ;)
<prpplague> mithro: i'm based out of Dallas/Ft.worth, so there are a bunch of local companies that work with Texas Instruments, so we have a resource of places that can do the bonding and epoxy coverings for CoB
<mithro> With wirebonding, the cost actually comes from the fact that it's a fairly slow process
<prpplague> mithro: indeed
<prpplague> TI has a building in Plano that they have been trying to sell for like a decade
<prpplague> they've been using it for storage, so it's full of old manufacturing equipment
<prpplague> once in a while they do an auction and sell off stuff
<tnt> Mmm, looks like OpenRAM doesn't like large arrays.
<prpplague> are the standard cell muxes unidirectional?
<tnt> yes
<prpplague> figured
<tnt> they're not analog muxes
<prpplague> just wanted to make sure
<tnt> Did anyone see a foundry model for the mosfets ?
<mithro> tnt: They haven't been published yet
<tnt> ack
<mithro> tnt: They have been targeting 1 kbytes at the moment
<mithro> tnt: You can build bigger arrays by writing some verilog to combine them
<tnt> Do you know what the limitation is ?
<tnt> Ah looks like the number of words per row is something that needs manual design, so there is only support for 1/2/4/8
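(A minimal sketch of the banking idea mithro describes above — combining several small RAM macros into one larger address space. This models the address decode in Python; in a real design the same top-bits/low-bits split would be written in Verilog around the generated OpenRAM macros:)

```python
# Model of combining four 1 KB single-port RAM banks into a 4 KB memory:
# the high address bits select a bank, the low bits index within it.
BANK_SIZE = 1024      # words per macro (OpenRAM's ~1 kbyte target)
NUM_BANKS = 4

banks = [[0] * BANK_SIZE for _ in range(NUM_BANKS)]

def write(addr, data):
    banks[addr // BANK_SIZE][addr % BANK_SIZE] = data

def read(addr):
    return banks[addr // BANK_SIZE][addr % BANK_SIZE]

write(0, 0xAA)        # lands in bank 0
write(3000, 0x55)     # lands in bank 2 (3000 // 1024 == 2)
print(read(0), read(3000))  # 170 85
```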
<mithro> tnt: Best to ask matt
<mithro> FYI there is a slack at skywater-pdk.slack.com -- I'm unclear the status of getting public invites open
<prpplague> mithro: nice!
<prpplague> mithro: RAM block?
<karimyaghmour> mithro: thx for the Slack channel pointer.
<tnt> mithro: is that from openram ?
<mithro> Yes
<mithro> I would really like to see if we can get fully open source ReRAM based FPGAs
<prpplague> oh that's interesting