ChanServ changed the topic of ##yamahasynths to: Channel dedicated to questions and discussion of Yamaha FM Synthesizer internals and corresponding REing. Discussion of synthesis methods similar to the Yamaha line of chips, Sound Blasters + clones, PCM chips like RF5C68, and CD theory of operation are also on-topic. Channel logs: https://freenode.irclog.whitequark.org/~h~yamahasynths
<cr1901_modern>
I was doing cleaning today and I found some old pen-and-paper notes on the YM301x DAC voltages and converting them to something an FPGA can use.
<cr1901_modern>
My initial Migen design from these notes was subtly wrong, but I should scan or transcribe them anyway.
andlabs has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<kode54>
cr1901_modern: rather than each filter having effective latency
<kode54>
or rather, attempting to prevent in:out latency
<kode54>
each filter and input stage buffers somewhat of its old output
<kode54>
rewind expects to be able to recall that data
<cr1901_modern>
kode54: I vaguely recall that if you have the entire input ahead of time, you can filter the data forwards and then in reverse to get filtered output with zero phase distortion.
<cr1901_modern>
But w/ sound you'll never have the entire input ahead of time
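The forward-then-backward trick mentioned above can be sketched in a few lines of Python. This is a minimal illustration, not from the conversation: `one_pole_lowpass` is a hypothetical stand-in for any causal filter, and running it in both directions cancels the phase shift of the two passes, at the cost of needing the whole signal up front.

```python
def one_pole_lowpass(x, a=0.5):
    """Simple causal one-pole IIR lowpass: y[n] = a*x[n] + (1-a)*y[n-1]."""
    y, prev = [], 0.0
    for s in x:
        prev = a * s + (1.0 - a) * prev
        y.append(prev)
    return y

def filtfilt(x, a=0.5):
    """Filter forward, then filter the reversed result and reverse back.

    The phase lag of the forward pass is undone by the backward pass
    (zero-phase filtering), but this needs the entire input ahead of
    time, so it cannot run in real time.
    """
    fwd = one_pole_lowpass(x, a)
    return one_pole_lowpass(fwd[::-1], a)[::-1]
```

With an impulse in the middle of a buffer, the combined response peaks at the impulse position instead of lagging behind it, which is the whole point of the two-pass scheme.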
<cr1901_modern>
is this an attempt to get around that real time limitation?
<kode54>
sort of
<kode54>
my filter needs blocks of data at once
<kode54>
for an FFT filter with an input impulse of hrir_samples length
<cr1901_modern>
overlap add/save?
<cr1901_modern>
or unrelated?
<kode54>
save
<kode54>
overlap save
<cr1901_modern>
save was the one I understood better in uni, but it's been _years_
<kode54>
hrir_samples is scaled up by 11/8, then rounded up to a power of two, for the fftlen
<kode54>
audio is rewound by up to fftlen - hrir_samples, and a block of old and new samples equal to fftlen is pulled off
<kode54>
then fft is applied
<kode54>
(forward, cross multiply with the fft'd impulse, then fft back)
<cr1901_modern>
right
<kode54>
then the output only captures the last hrir_samples worth of output
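The steps above (pull a block of old plus new samples, forward FFT, cross-multiply with the FFT'd impulse, inverse FFT, keep only the tail) are the overlap-save recipe. A minimal pure-Python sketch of one block step follows; the recursive FFT and the function names are illustrative, the fftlen is assumed to be a power of two, and this textbook variant keeps the last `fftlen - len(h) + 1` samples (the ones free of circular wraparound) rather than the `hrir_samples`-based buffering described above.

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def ifft(x):
    """Inverse FFT via the conjugate trick."""
    n = len(x)
    y = fft([v.conjugate() for v in x])
    return [v.conjugate() / n for v in y]

def fft_convolve_block(block, h, fftlen):
    """One overlap-save step: FFT the block, cross-multiply with the
    FFT'd impulse, inverse FFT, and keep only the last
    fftlen - len(h) + 1 samples, which match linear convolution."""
    H = fft(list(h) + [0.0] * (fftlen - len(h)))
    B = fft(list(block) + [0.0] * (fftlen - len(block)))
    y = ifft([a * b for a, b in zip(B, H)])
    valid = fftlen - len(h) + 1
    return [v.real for v in y[-valid:]]
```

In a real convolver the impulse would be FFT'd once at startup and reused for every block, which is where the big win over naive convolution comes from.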
<cr1901_modern>
I assume that's what you meant by "FFT filter"
<kode54>
yup
<kode54>
it's an FFT convolver
<kode54>
used for downmixing surround sound to stereo
<kode54>
the preset I was using is freaking 100ms long
<cr1901_modern>
I don't remember the exact crossover, but for small filters direct convolution wins on computational cost, while for large filters FFT -> multiply -> IFFT wins
<kode54>
Equalizer APO can somehow stomach that sort of latency, but Wine and other games in PulseAudio can't
<cr1901_modern>
at least, the TI DSP libraries _I_ dealt with did convolution naively
<kode54>
this filter used to do it naively
<kode54>
it also used to hard cap at 64 samples per impulse
<kode54>
(even the hrir sets that were distributed for use with it had 128+ samples)
<kode54>
this filter, with a 100ms impulse, is fine for common audio tasks
<kode54>
not good for low latency
<kode54>
the IIR based filter this person shared with me, which I made the basis of my FFT FIR plugin, works nicely
<kode54>
it only has block sizes of 1024 samples
<cr1901_modern>
kode54: To be clear, when I brought up acoustic impulse response last week, that was literally the first time I had heard of the concept. So I still have a bunch to learn.
<kode54>
aha
<kode54>
well
<kode54>
to gather an impulse response from a digital effect, you just need to pass a single dirac pulse through the filter, and capture the output
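The dirac-pulse capture described above is easy to demonstrate in code. A minimal sketch: `capture_ir` and the echo effect are hypothetical names, and the technique only gives the true impulse response when the effect is linear and time-invariant, as stated.

```python
def capture_ir(effect, length):
    """Feed a unit impulse (a single 1.0 followed by zeros) through a
    sample-by-sample digital effect and record the output; for a linear
    time-invariant effect, that recording IS its impulse response."""
    impulse = [1.0] + [0.0] * (length - 1)
    return [effect(s) for s in impulse]

def make_echo(delay=4, gain=0.5):
    """Hypothetical example effect: feedforward comb (single echo),
    y[n] = x[n] + gain * x[n - delay]."""
    buf = [0.0] * delay  # last `delay` input samples, oldest first
    def effect(s):
        out = s + gain * buf[0]
        buf.pop(0)
        buf.append(s)
        return out
    return effect
```

The captured response shows the dry impulse at sample 0 and the echo tap `delay` samples later, exactly the filter's definition read back out.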
<cr1901_modern>
there's multiple ways to do that
<cr1901_modern>
I think my favorite is "shoot a gun" and record the output
<cr1901_modern>
lmao
<kode54>
for real life IR, you need a test sound or set of test sounds, and you need to deconvolve the resulting sound against the original test sounds to produce an impulse response
<kode54>
yeah
<kode54>
but even for "shoot a gun" you need to deconvolve it a bit
<kode54>
since the original impulse signature is not a perfect instantaneous sample
<kode54>
dirac pulse is only possible with digital processing
<kode54>
so you can use dirac, a single sample pulse, to capture digital effects engines
<cr1901_modern>
Right, and that's about where my knowledge ends :(
<kode54>
I don't know how to deconvolve, really
<cr1901_modern>
I need to chew on this a bit more and formulate some q's before we continue, if that's okay
<kode54>
but I do know for convolution, you basically just multiply every input sample against the impulse response
<kode54>
that has the acoustic effect of simulating that response with whatever sample data you desire
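That per-sample multiply-and-accumulate is direct-form convolution. A minimal sketch (the function name is hypothetical), which also makes the cost visible: every input sample touches every impulse-response tap, so a 100 ms impulse at 44.1 kHz means roughly 4400 multiplies per output sample.

```python
def convolve_naive(x, h):
    """Direct convolution: scatter each input sample, scaled by every
    impulse-response tap, into the output. Cost is O(len(x) * len(h)),
    which is what makes long impulses expensive done this way."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, s in enumerate(x):
        for k, tap in enumerate(h):
            y[n + k] += s * tap
    return y
```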
<kode54>
FFT method is the fastest way to do it with large data sets
<kode54>
since instead of having to multiply hrir_samples per input sample and add them all together
<kode54>
er, add them all to output
<kode54>
instead, you fft each block once (roughly n log n work per fft)
<kode54>
fft once the impulse on startup
<kode54>
multiply, which involves four multiplies, one add, one subtract, per pair of complex samples (fftlen/2+1)
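The per-bin cross-multiply described above (four multiplies, one subtract, one add per complex pair) is just complex multiplication written out on (real, imag) pairs. A minimal sketch with a hypothetical name:

```python
def cmul(ar, ai, br, bi):
    """Multiply two spectrum bins stored as (real, imag) pairs:
    (ar + ai*j) * (br + bi*j). Exactly four multiplies, one subtract
    (real part), and one add (imaginary part)."""
    return (ar * br - ai * bi,   # real part
            ar * bi + ai * br)   # imaginary part
```

This is the scalar form of what a vectorized routine (such as an accelerated complex vector multiply) applies across all fftlen/2+1 bins at once.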
<kode54>
complex numbers in fft data are "real" and "imaginary"
<cr1901_modern>
Yup, and that's why we spent the most time studying FFT in DSP. It is faster to get the same result.
<kode54>
the magnitude of the real/imaginary pair is the strength of that frequency band
<kode54>
and its angle is the relative phase of that band
<kode54>
I had to learn how to multiply complex numbers
<kode54>
amazingly, Apple's vDSP has a function to do an entire array of them for you, accelerated