<GitHub135>
[llvm-lm32] jpbonn pushed 1 new commit to master: http://bit.ly/nT6SLj
<GitHub135>
[llvm-lm32/master] added note on --with-llvmgccdir - jpbonn
<mwalle>
qemu 0.15.0 is released with milkymist (mm) support :)
<larsc>
nice
<mwalle>
larsc: did you look into the thread switching bug?
<wpwrak>
regarding M1 demo videos, i wonder if it would be possible to generate the effects M1 produces in non-real-time in software. that way, we could have high-quality videos without needing a VGA capture device.
<larsc>
mwalle: i didn't have time yet :/
<wpwrak>
wolfspraul: btw, what is the problem with VGA capture ? seems that devices aren't *that* expensive. e.g., this one seems to cost around USD 300 and work with linux: http://www.epiphan.com/products/frame-grabbers/vga2usb/
<mwalle>
larsc: np, /me is just trying to set an awatch on lm32_current_thread that fires when it's == 0
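(a rough sketch of that watchpoint via GDB's Python API; the symbol lm32_current_thread is from the chat above, everything else is illustrative. the plain-CLI equivalent would be "awatch lm32_current_thread" followed by a "condition" on the breakpoint number)

    # run inside gdb, e.g. "source watch.py" (hypothetical file name)
    import gdb

    # access watchpoint on the scheduler's current-thread pointer...
    wp = gdb.Breakpoint("lm32_current_thread",
                        gdb.BP_WATCHPOINT, gdb.WP_ACCESS)
    # ...that only stops execution when the pointer is NULL
    wp.condition = "lm32_current_thread == 0"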
<mwalle>
wpwrak: mh, video capture might be possible, but i don't know if you can feed the audio into qemu in a non-realtime fashion
<wpwrak>
mwalle: maybe someone could add event-driven simulation to qemu ? ;-) i once did this with UML. it's kinda fun if you can do "date; sleep 3600; date" and it comes back immediately, but the time has indeed moved by one hour
<wpwrak>
of course, my hack had knowledge about the kernel's internal timekeeping, which qemu doesn't
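(purely for illustration, a toy model of the event-driven time wpwrak describes; this is not qemu or UML code. instead of blocking, sleep() just schedules a wakeup, and the clock warps straight to each deadline)

    import heapq
    import itertools

    class VirtualClock:
        """Simulated clock that jumps to the next event instead of sleeping."""
        def __init__(self):
            self.now = 0.0
            self._seq = itertools.count()   # tie-breaker for equal deadlines
            self._events = []

        def sleep(self, seconds, callback):
            # schedule a wakeup; never blocks
            heapq.heappush(self._events,
                           (self.now + seconds, next(self._seq), callback))

        def run(self):
            # pop events in deadline order, warping time forward
            while self._events:
                self.now, _, cb = heapq.heappop(self._events)
                cb()

    clock = VirtualClock()
    clock.sleep(3600, lambda: print(f"woke at t={clock.now}s"))
    clock.run()   # returns immediately, yet clock.now is 3600.0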
<mwalle>
this might be possible with the qemu user emulation :)
<mwalle>
wpwrak: instead of hacking qemu, you could create special input and output devices and make an application which reads the audio samples, renders the image and pushes that onto the output device
<mwalle>
of course that application would be a stripped-down flickernoise/demo which doesn't have any timebase
<larsc>
can't you just modify flickernoise to stream the video data?
<larsc>
or is that too much load?
<mwalle>
larsc: on real hw? i'd guess it would be too much throughput on the ram
<mwalle>
raw video at 1024x768@16bpp/25fps would be about 37 MiB per second
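(the estimate checks out; a quick back-of-the-envelope verification, assuming uncompressed frames and no blanking)

    # 1024x768 pixels, 16 bpp = 2 bytes/pixel, 25 frames/s
    rate = 1024 * 768 * 2 * 25
    print(rate / 2**20)   # -> 37.5, i.e. about 37.5 MiB per second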
<larsc>
and if it was non-realtime you'd have the problem with audio again
<larsc>
i guess
<mwalle>
yeah the audio adc provides a constant stream :)
<larsc>
on the other hand you could use a prerecorded audio file
<wpwrak>
yup. a sw renderer could just take, say, a WAV, and maintain its own idea of time
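(a minimal sketch of such a renderer, assuming 25 fps and a hypothetical render_frame() standing in for the actual Flickernoise effect pipeline; the point is only that time is derived from the audio file, not from a real-time clock)

    import wave

    def render_frame(samples, t):
        # hypothetical placeholder: a real renderer would analyse the
        # samples and draw frame t of the effect here
        print(f"frame at t={t:.2f}s ({len(samples)} audio bytes)")

    def render_offline(wav_path, fps=25):
        with wave.open(wav_path, "rb") as wav:
            per_frame = wav.getframerate() // fps   # audio frames per video frame
            t = 0.0                                 # the renderer's own clock
            while True:
                samples = wav.readframes(per_frame)
                if not samples:
                    break
                render_frame(samples, t)
                t += 1.0 / fps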
<Hodapp>
All of this talk of emulation and embedded stuff makes me want to shoot myself for being stuck here with SOAP and this overabstracted mess of Java web services...
<larsc>
haha!
<larsc>
;)
<Hodapp>
My job prior to this involved some embedded work; I think I greatly preferred it.
<Hodapp>
"Slow" here might mean something takes a couple hours. "Slow" there might mean that my interrupt was taking more than 100 nanoseconds.
<mwalle>
i hate webapps and gui programming :)
<Hodapp>
Qt isn't bad, but these web technologies are just horrid sometimes.
<Hodapp>
and I've just not had any chance at this job to work with lower-level things, and very little time outside of it.
<larsc>
at least you can't fry your hardware easily. I think I just somehow killed the board i'm working on :/
<wpwrak>
larsc: the final triumph over those bugs ? :)
<larsc>
it's rather annoying. the driver was basically done. now i have to get the multimeter and find out what exactly is wrong. i can still talk to the control port of the chips, but the outputs are clearly not driven
<wpwrak>
hmm. not a nice moment then. what peripheral is it ?
<larsc>
a sound chip
<larsc>
adau1373
<wpwrak>
hmm, odd that this one would be so easy to fry
<larsc>
yep, at least in theory all the outputs have overcurrent protection. maybe it's just a jumper that fell off...