<lekernel>
wpwrak, what do you think about a video input "color transformation matrix" controllable from the patch?
<lekernel>
also, I used LED lighting... doesn't help
<wpwrak>
yeah, video input modification could be very interesting. maybe even go one step further, and have a few FPUs that just work on that ? easy to parallelize, shouldn't need a lot of instructions/registers, and the existing FPU design is already perfect for SIMD
<lekernel>
what kind of transforms would you like to be able to do?
<lekernel>
or, more particularly, what data?
<lekernel>
one pixel? squares of 2x2 pixels? more? overlapping/non-overlapping?
<wpwrak>
i was thinking of just pixels. dunno if that's too technical a view, though
<wpwrak>
if you have >1 pixel, you could of course do edge detection and such
<lekernel>
do you have a color transform in mind that would look significantly better than using a matrix?
<wpwrak>
no. i'm just thinking of what could give us a maximum of flexibility with the available technology
<lekernel>
also, the color transform could generate an alpha channel
<lekernel>
there's already alpha support down the pipeline, but it's only a global alpha
<lekernel>
that could be nice to make parts of the included images transparent
<wpwrak>
oh yes, that would be cool
<lekernel>
we'll need to overhaul the PFPU register allocator a bit... 4x3 matrices will eat registers like crazy
<wpwrak>
3x3 ? 27 registers if you have one for r, g, b each
<lekernel>
mh?
<lekernel>
the matrix does RGB->RGBA
<lekernel>
it's 4x3, 12 coefficients
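[The 4x3 transform lekernel describes can be sketched as follows. This is only an illustration of the math, not the PFPU implementation; the matrix coefficients below are made-up examples (identity on RGB, a luma-like weighting for alpha).]

```python
# Illustrative sketch of the 4x3 color transform: 4 output rows (R, G, B, A)
# times 3 input columns (R, G, B) = 12 coefficients, as discussed above.
def color_transform(pixel_rgb, matrix):
    """Apply a 4x3 matrix to one RGB pixel, producing RGBA."""
    r, g, b = pixel_rgb
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in matrix)

# Hypothetical coefficients: pass RGB through, derive alpha from the color,
# so e.g. dark parts of an included image could become transparent.
M = [
    [1.0, 0.0, 0.0],   # R out
    [0.0, 1.0, 0.0],   # G out
    [0.0, 0.0, 1.0],   # B out
    [0.3, 0.6, 0.1],   # A out, computed from the input color
]
```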
<wpwrak>
aah, i was thinking of an FPU (array) that works on each pixel.
<wpwrak>
e.g., with a 3x3 RGB matrix on input, one RGBA output
<lekernel>
processing multiple pixels at once is more difficult... you can't simply insert a stage in the TMU pipeline
<wpwrak>
just in the video in path, before it hits anything else ?
<lekernel>
the rendering system uses the TMU to blit the camera picture into the texture buffer
<lekernel>
if we put this matrix thing in the TMU, we have more flexibility and we factor things (e.g. supersedes decay, makes it possible to apply the transform on still pictures)
<wpwrak>
so you'd put it in the frame buffer feedback loop ?
<lekernel>
all camera pictures go through the TMU atm
<lekernel>
except for video-in preview in the GUI (and that's why it's slow)
<wpwrak>
i haven't looked at that part in detail yet, but is the camera->TMU interface a pixel stream ? or something more complicated ?
<wpwrak>
my FPU idea was to put an array of FPUs in the camera pixel stream, where they transform each pixel before it goes to the TMU for further processing
<lekernel>
the video-in core receives the signal, does the YUV->RGB transform and just DMA's the result
<lekernel>
you get raw interlaced framebuffers at the output
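[For reference, the YUV→RGB step the video-in core performs is the standard one. This sketch uses BT.601 full-range coefficients; the constants and fixed-point format in the actual hardware may differ.]

```python
# Rough sketch of a YUV->RGB conversion like the video-in core's
# (BT.601 full-range coefficients, 8-bit samples, chroma centered on 128).
def yuv_to_rgb(y, u, v):
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)
```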
<wpwrak>
eek
<wpwrak>
interlaced is like the Terminator. damn hard to kill.
<lekernel>
there are two framebuffers, one for each field. you can also tell the video-in core that you want only one field (and this is what it does atm)
<lekernel>
well, we simply discard one field and scale it up vertically with the TMU :-)
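[The field-discard approach lekernel describes can be modelled in a few lines. The real pipeline does a proper scaled blit with the TMU; naive line duplication here just shows the effect on vertical resolution.]

```python
# Minimal model of the current approach: keep one field of an interlaced
# frame (every other line), then scale it back up vertically 2x.
def deinterlace_discard(frame):
    field = frame[::2]               # keep even lines only, drop the odd field
    out = []
    for line in field:
        out.extend([line, line])     # naive 2x vertical upscale by duplication
    return out
```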
<wpwrak>
aah, hence the low resolution ! :)
<lekernel>
yes, but deinterlacing without major losses of vertical resolution is a massive pain
<wpwrak>
okay, so at least with the current half-frame approach, such a pixel processor array ought to work. for an NxN matrix input, you'd need a buffer of (N-1)/2 lines + (N-1)/2 pixels before the pixel FPU array
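[wpwrak's buffering figure checks out: centering an odd NxN window on a pixel in a raster stream requires delaying the stream by (N-1)/2 full lines plus (N-1)/2 extra pixels. A quick back-of-the-envelope check, with a hypothetical line width:]

```python
# Delay (in pixels) needed before an NxN window can be centered on the
# current pixel of a raster-order stream: (N-1)/2 lines + (N-1)/2 pixels.
def window_delay_pixels(n, width):
    assert n % 2 == 1, "odd-sized windows only"
    half = (n - 1) // 2
    return half * width + half

# e.g. a 3x3 window on a 720-pixel-wide field needs 721 pixels of delay
```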
<lekernel>
difficult, easy to get wrong, and a memory bandwidth pig
<wpwrak>
oh yes, deinterlacing sucks :)
<wpwrak>
why a pig ?
<lekernel>
because deinterlacing algos can make a lot of memory accesses for things like motion estimation
<wpwrak>
aah, deinterlacing. okay. thought you were talking about the FPU array :)
<lekernel>
if you implement this processor array in the TMU, you can't choose the pixel access order
<lekernel>
you can implement it as a separate unit that operates on framebuffers, and then you are free to use any access order
<lekernel>
but it increases memory bandwidth consumption
<lekernel>
if you do it in the video-in core, you lose flexibility
<wpwrak>
and if it's between the camera YUV->RGB and the TMU ? or maybe even replacing YUV->RGB ?
<lekernel>
or, you can break away from the DMA paradigm and implement "pixel stream buses" :)
<wpwrak>
that sounds nice :)
<lekernel>
you have this pixel processing unit sitting somewhere on the chip, and you can route it between the video-in core and its DMA write unit
<lekernel>
or between a DMA-read and a DMA-write unit
<lekernel>
dynamically
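[The "pixel stream bus" idea can be modelled abstractly: the same processing unit is routed between any pixel source and sink, whether that is the video-in core feeding a DMA writer or a DMA reader feeding a DMA writer. Everything below is a toy model; the names are made up.]

```python
# Toy model of a routable pixel processing unit: the same stages can sit
# between any source and sink, chosen dynamically, as lekernel describes.
def run_pipeline(source, stages, sink):
    for pixel in source:
        for stage in stages:
            pixel = stage(pixel)
        sink.append(pixel)

# Route the unit camera->DMA or DMA->DMA just by swapping source and sink:
invert = lambda p: 255 - p           # stand-in for a per-pixel transform
out = []
run_pipeline([0, 128, 255], [invert], out)
```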
<wpwrak>
that sounds very nice, yes
<wpwrak>
or maybe even have two of them. not sure how fat they'd get. the DMA-to-DMA one would have to be bigger than a camera-only unit, due to the larger number of pixels.
<lekernel>
well, there are a lot of details. here's one :)
<lekernel>
and you'll need flow control on the buses anyway
<wpwrak>
for the DMA-to-DMA path, yes, sounds reasonable
<lekernel>
even at the video-in output and at the DMA-write input
<lekernel>
you cannot predict the memory access latency
<lekernel>
because of DRAM refreshes, switching DRAM rows, and bus sharing
<wpwrak>
hmm, but you'd already have to have a mechanism of this sort in place
<wpwrak>
the FPU array would just add a fixed delay to the camera pixel pipeline
<wpwrak>
"fixed" as in either worst-case (i.e., size of program space) or as in always the same for the same program, but variable across programs
<zumbi>
hi guys! do you have a pointer for the copyright license for milkymist hardware?
<rigid>
just saw the project. nice! (i love the transparent case :)
<rigid>
what s/w are you using? is this somehow related to milkdrop?
<rigid>
*crawls the webpage*
<rigid>
ah.. yep
<lekernel_>
hi rigid
<lekernel_>
own software, and yes, lots of milkdrop inspirations
<rigid>
lekernel_: looks very nice... and great it's open
<rigid>
i'm looking for open audio visualization software to use with LEDs
<juliusb>
nice slashdot article
<juliusb>
lekernel_: I like your presentation about OS FPGA toolchains
<juliusb>
are you doing anything like that in the UK any time soon? I've moved to Cambridge, btw, and am planning on doing things with the OSHUG here - if you're not too far away you might like to come and get some of those guys interested