lekernel changed the topic of #m-labs to: Mixxeo, Migen, MiSoC & other M-Labs projects :: fka #milkymist :: Logs http://irclog.whitequark.org/m-labs
mrueg has joined #m-labs
Hawk777 has quit [Quit: ZNC - http://znc.in]
Hawk777 has joined #m-labs
Hawk777 has quit [Quit: ZNC - http://znc.in]
Hawk777 has joined #m-labs
kmehall has quit [Remote host closed the connection]
kmehall has joined #m-labs
kmehall has quit [Remote host closed the connection]
kmehall has joined #m-labs
gbraad has quit [Ping timeout: 240 seconds]
gbraad has joined #m-labs
nicksydney has quit [Quit: No Ping reply in 180 seconds.]
nicksydney has joined #m-labs
nicksydney has quit [Remote host closed the connection]
Martoni has joined #m-labs
sb0 has joined #m-labs
sb0 has quit [*.net *.split]
Hawk777 has quit [*.net *.split]
marcan has quit [*.net *.split]
zumbi_ has quit [*.net *.split]
playthatbeat has quit [*.net *.split]
proppy has quit [*.net *.split]
zumbi has joined #m-labs
marcan has joined #m-labs
playthatbeat has joined #m-labs
Hawk777 has joined #m-labs
playthatbeat has quit [*.net *.split]
kristianpaul has quit [*.net *.split]
stekern has quit [*.net *.split]
sb0 has joined #m-labs
ysionneau has quit [*.net *.split]
proppy has joined #m-labs
kristianpaul has joined #m-labs
kristianpaul has joined #m-labs
kristianpaul has quit [Changing host]
ysionneau has joined #m-labs
stekern has joined #m-labs
playthatbeat has joined #m-labs
proppy has quit [*.net *.split]
marcan has quit [*.net *.split]
azonenberg has quit [*.net *.split]
ohama has quit [*.net *.split]
awallin has quit [*.net *.split]
marcan` has joined #m-labs
proppy has joined #m-labs
ohama has joined #m-labs
sb0 has quit [*.net *.split]
kmehall has quit [*.net *.split]
early has quit [*.net *.split]
aeris has quit [*.net *.split]
bentley` has quit [*.net *.split]
rjo has quit [*.net *.split]
balrog has quit [*.net *.split]
aeris has joined #m-labs
kmehall has joined #m-labs
awallin has joined #m-labs
balrog has joined #m-labs
early has joined #m-labs
sb0 has joined #m-labs
rjo has joined #m-labs
proppy has quit [Changing host]
proppy has joined #m-labs
playthatbeat has quit [Ping timeout: 252 seconds]
playthatbeat has joined #m-labs
awallin has quit [*.net *.split]
marcan` has quit [*.net *.split]
gric has quit [*.net *.split]
marcan has joined #m-labs
gric has joined #m-labs
awallin has joined #m-labs
marcan has quit [Ping timeout: 264 seconds]
marcan has joined #m-labs
ohama has quit [*.net *.split]
stekern has quit [*.net *.split]
ysionneau has quit [*.net *.split]
Martoni has quit [*.net *.split]
asper has quit [*.net *.split]
zeiris_ has quit [*.net *.split]
nengel has quit [*.net *.split]
siruf has quit [*.net *.split]
jaeckel has quit [*.net *.split]
juliusb has quit [*.net *.split]
ysionneau has joined #m-labs
stekern has joined #m-labs
juliusb has joined #m-labs
siruf has joined #m-labs
asper has joined #m-labs
nengel has joined #m-labs
Martoni has joined #m-labs
ohama has joined #m-labs
jaeckel has joined #m-labs
zeiris has joined #m-labs
ysionneau has quit [*.net *.split]
jaeckel has quit [*.net *.split]
gric has quit [*.net *.split]
zumbi has quit [*.net *.split]
zumbi has joined #m-labs
gric has joined #m-labs
ysionneau has joined #m-labs
jaeckel has joined #m-labs
<larsc> mwalle: guess which error message I get when I try to use $clog2 in a localparam with vivado...
Zou is now known as Gurty
<larsc> well, nothing beats a good old ? : tree
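A pure-Python model of `$clog2` (Verilog-2005 semantics: `$clog2(1) == 0`, `$clog2(2) == 1`) shows what a hand-written `? :` tree has to encode; this is an illustrative sketch, not tool-specific code:

```python
def clog2(value):
    # Ceiling log2 with Verilog-2005 $clog2 semantics: the number of
    # bits needed to index `value` distinct items.
    # $clog2(0) == 0, $clog2(1) == 0, $clog2(2) == 1, $clog2(3) == 2.
    bits = 0
    value -= 1
    while value > 0:
        value >>= 1
        bits += 1
    return bits
```

Since Migen computes signal widths in Python before any Verilog is emitted, a helper like this sidesteps flaky synthesizer support for `$clog2` entirely.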
nengel has quit [Quit: leaving]
nengel has joined #m-labs
aeris has quit [Read error: Operation timed out]
aeris has joined #m-labs
awallin has quit [Quit: No Ping reply in 180 seconds.]
awallin has joined #m-labs
marcan has quit [Ping timeout: 252 seconds]
marcan has joined #m-labs
azonenberg has joined #m-labs
<GitHub41> [misoc] sbourdeauducq pushed 2 new commits to master: http://git.io/28c0ow
<GitHub41> misoc/master f76da70 Sebastien Bourdeauducq: software/libcompiler-rt: adapt to new upstream directory organization
<GitHub41> misoc/master 0c3f8f7 Sebastien Bourdeauducq: targets/simple: add dummy SDRAM + flash boot address
kmehall has quit [*.net *.split]
kristianpaul has quit [*.net *.split]
gbraad has quit [*.net *.split]
Gurty has quit [*.net *.split]
felix_ has quit [*.net *.split]
gbraad has joined #m-labs
gbraad has joined #m-labs
gbraad has quit [Changing host]
kristianpaul has joined #m-labs
kristianpaul has quit [Changing host]
kristianpaul has joined #m-labs
Gurty has joined #m-labs
kmehall has joined #m-labs
felix__ has joined #m-labs
<GitHub163> [misoc] sbourdeauducq pushed 1 new commit to master: http://git.io/4c1LxA
<GitHub163> misoc/master 4240058 Sebastien Bourdeauducq: README: rewrap
gbraad has quit [Ping timeout: 255 seconds]
gbraad has joined #m-labs
rjo has quit [Ping timeout: 255 seconds]
rjo has joined #m-labs
felix__ is now known as felix_
bvernoux has joined #m-labs
Alain has joined #m-labs
<Alain> Good evening, "kids" ;)
<Alain> oops
Jespers has joined #m-labs
mumptai has joined #m-labs
<larsc> What elephant? Are you calling me fat?
<mumptai> well, if there is an elephant in the room, introduce him
<Jespers> Hello. Interesting project!
<Jespers> What would it take to increase the theoretical resolution/data rate of this project? Just a bigger FPGA? Or more FPGAs?
Alain has quit [Quit: ChatZilla 0.9.90.1 [Firefox 28.0/20140314220517]]
<sb0> Jespers, I guess you mean Mixxeo? faster SERDES or an external HDMI SERDES, and faster SDRAM
<Jespers> I have an idea for a realtime texturemapper/UV-mapper
<Jespers> taking one HD signal as UV reference (red/green components giving X/Y coordinates)
<Jespers> and one "texture" signal being a normal video source
<Jespers> then remapping the texture positions in accordance with the UV signal, applying some interpolation/filtering/antialiasing
<Jespers> what do you think?
<sb0> what's the use case/market for that?
<Jespers> video DVE. realtime video comping.
<Jespers> building a software gui for better comping
<Jespers> and the letting a "dumb" embedded mapper do the heavy lifting
<sb0> isn't that thing done more easily on a proprietary GPU?
<Jespers> possibly
<Jespers> what about latency?
<Jespers> GPU gives more options
<Jespers> more like a full shader with lighting properties and so on
<sb0> so this will likely end up in no-one-gives-a-shit land
<Jespers> :)
<Jespers> so I’ve been told!
<Jespers> thanks anyway
<sb0> Mixxeo is already a long shot...
<Jespers> as in?
<sb0> as in, I don't expect to sell a lot of them
<Jespers> fair enough
<Jespers> but the underlaying technology and ecosystem is a good thought?
<Jespers> could spring out in some new ideas and so on?
<sb0> yeah, it's a nice device, with cool tech
<sb0> it'll perhaps still sell a reasonable number
<Jespers> I’m doing broadcasting and other live productions for IMAG and streaming. I hate that every vision mixer is limited to some ugly presets and cheap effects
<sb0> but that mapper device sounds even more niche than the mixxeo, and there will be extremely harsh competition from GPUs
<Jespers> I would like to comp more freely, like in After Effects or something, with live sources
<Jespers> Yeah, GPUs are getting there.
<Jespers> I guess CUDA/OpenCL/Direct GPU and so on solves the latency bit
<Jespers> 2 frames is absolutely the maximum for live video in broadcast or IMAG
<sb0> hmm, if latency is important you might still beat PCs/GPUs
<Jespers> latency is the #1 feature
<Jespers> 1 frame would be a killer feat
<Jespers> 2 is OK, but the value of the product just barely exists then
<sb0> I guess you are counting frames on the texture (not UV) signal?
<sb0> 1 frame is doable on FPGA, yes
<Jespers> yes
<Jespers> the correlation between UV and texture is not that important to have in frame sync. texture throughput is the issue
<Jespers> how about resolutions and framerates?
<sb0> you can generate the output video signal on the fly from the 2 input framebuffers (texture and UV)
<sb0> then you have very low latency
<Jespers> the mapping process itself is just a LUT or something? and then filtering and other comping needs to be applied after that?
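The "just a LUT" intuition can be modeled in a few lines of Python — an illustrative sketch of the data flow, not MiSoC code; real hardware would pipeline this and fight the random-access SDRAM pattern discussed here:

```python
def uv_remap(texture, uv):
    # For each output pixel, look up its (u, v) entry and fetch that
    # coordinate from the texture framebuffer. Framebuffers are plain
    # nested lists here; filtering/antialiasing would follow this step.
    return [[texture[v][u] for (u, v) in row] for row in uv]
```

For example, a UV map whose entries mirror each row horizontally produces a horizontally flipped texture, with no change to the texture buffer itself.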
<sb0> there aren't really limits, it depends how much you can afford to pay for expensive FPGAs and multilayer board layouts for having lots of SDRAM chips
<Jespers> so the main issue then is to have enough capacity for high resolution and color bitdepth?
<Jespers> what price range of FPGAs are we talking about then?
<sb0> $30 - $10000
<Jespers> :P
<Jespers> i get it
<Jespers> and with 4K being the next big thing, then I guess the cheap ones aren’t doing it. Well, GPUs then :)
<sb0> 4K might be possible with a $100-$200 kintex-7
<sb0> but you have to add board layout, fab, FPGA design, etc.
<Jespers> yeah
<Jespers> it will still just remain a dream. I don’t have the knowledge to get to the right knowledge...
<sb0> of course, board layout and FPGA design are fixed costs, divided by the number of devices you are selling. but I guess it'd only be a few, right?
<Jespers> I had a group of students working with me on some rapid acid-testing
<Jespers> we/they concluded that the project wasn’t worth any feasible business model
<Jespers> mostly because of uncertainty about the technology.
<Jespers> and the need to develop to prove the concept
<sb0> what video standard? SDI?
<Jespers> yeah
<Jespers> …and because we found a couple of patents looking an awefull lot like this...
<sb0> just ignore
<Jespers> :)
<sb0> for small companies, any involvement with the patent system is a pure waste of time and money
<sb0> even just looking at them :)
<Jespers> :)
<Jespers> I really just want to have a product which I can feed live video, manipulate on a GUI-based intuitive interface, and that is broadcast/live-worthy regarding latency
<Jespers> From a business perspective: it could either be a standalone product, a third-party plugin system for existing products, or just an IP/reference/idea to sell to the big actors
<sb0> so you'd have another computer running the GUI and generating the UV signal for the mapper?
<Jespers> yup
<Jespers> or "some device" :P
<Jespers> the most intuitive user interface anyways.
<Jespers> like
<Jespers> my freaking iPhone is capable of pushing out the UV signal
<sb0> on SDI?
<Jespers> HDMI....
<Jespers> nearly the same
<sb0> ah, so you want HDMI for UV and SDI for texture
<Jespers> But "anything" can render 2D graphics in HD in realtime
<sb0> nope, HDMI and SDI aren't much the same :)
<Jespers> :)
<Jespers> both then ;)
<sb0> which SDI are you talking about? SMPTE 424M?
<Jespers> a workstation would be able to render several channels of SDI. UV + alpha + overlay/"light"
<Jespers> at least 292M for a start
<Jespers> no
<Jespers> 424M
<sb0> at 2.9Gbps I guess?
<Jespers> for the UV you need 424M to have enough bandwidth to cover all possible coordinates
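The back-of-envelope reason: 10-bit 4:2:2 video carries roughly 20 bits per pixel (luma plus one alternating chroma sample), while naming every position in a full-HD frame takes 22 bits. A quick check — illustrative arithmetic, not a SMPTE payload-capacity calculation:

```python
import math

def coord_bits(width, height):
    # Bits needed to address every (x, y) position in a width x height frame.
    return math.ceil(math.log2(width)) + math.ceil(math.log2(height))

# 1920x1080 needs 11 + 11 = 22 bits per pixel of coordinate data,
# which no longer fits in the ~20 usable bits of a 10-bit 4:2:2 pixel.
full_hd_bits = coord_bits(1920, 1080)
```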
<sb0> maybe a first option for you would be to prototype on a FPGA devkit
<sb0> e.g. KC705
<Jespers> heck, this could be prototyped in software with lower resolution/quality
<sb0> and add the SDI/HDMI stuff on a FMC carrier board
<sb0> it's still a lot of work, but you avoid much of the layout and board fab issues
<sb0> and you can get 1-frame latency performance
<sb0> still, the FPGA design won't be easy... especially if you haven't done such things before
<Jespers> I wouldn’t have started on that by myself :)
<Jespers> the university has a good faculty for embedded design and signal processing
<sb0> the Mixxeo hardware can do it as it is with HDMI, but the SDRAM bandwidth will limit your resolution
<Jespers> can Mixxeo be used as a base, and then upgraded?
<sb0> especially as fetching random pixels as guided by the UV signal is inefficient
<sb0> you can definitely prototype on the Mixxeo... that will give you a good taste of what's involved in the FPGA design
<Jespers> some other things in your projects that can be used? IP, tools, workflow, code?
<sb0> but I guess the resolution won't be more than 800*600 due to SDRAM limitations
<sb0> yes, sure
<sb0> you can take the existing HDMI digitizing, EDID, etc. code
<sb0> SDRAM controller, boot system, CPU, etc.
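The 800*600 ceiling is easy to sanity-check with stream-bandwidth arithmetic (the figures below are illustrative assumptions, not measured Mixxeo numbers): the mapper needs at least a texture input, a UV input, a UV-guided random fetch, and an output stream, each costing width x height x fps x bytes-per-pixel of sustained SDRAM traffic.

```python
def stream_bw(width, height, fps, bytes_per_pixel):
    # Sustained bytes/second to read or write one video stream.
    return width * height * fps * bytes_per_pixel

per_stream = stream_bw(800, 600, 60, 4)   # one 800x600, 60 fps, 32 bpp stream
total = 4 * per_stream                    # texture in + UV in + fetch + out
```

In practice the random fetch costs considerably more than one stream's worth, because UV-guided addressing defeats SDRAM burst access, as noted above.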
<Jespers> there are SO many reasons why this idea is fundamentally wrong and dead in the water
<Jespers> GPU alternatives being one, but also the cost of FPGAs being too high if the demand for performance is too high
<sb0> the only difficult new component to develop is the FPGA core that takes the texture and UV framebuffer and generates the output video signal on the fly
<Jespers> both altera and xilinx have great IP cores for a basis?
<sb0> no they don't, but we do :-)
<Jespers> :)
<Jespers> hah!
<Jespers> I like it
<sb0> but that mapper core needs to be custom anyway
<sb0> all the other stuff is already in misoc
<Jespers> antialiasing/interpolation?
<Jespers> blend modes?
<sb0> the mixxeo can already do 720p60 32bpp mixing
<sb0> from hdmi
<Jespers> cool
<Jespers> 1.5 Gbps then
<sb0> the hardware works, right now I'm basically in mechanical design hell
<Jespers> interesting
<sb0> it's less than 1.5Gbps; HDMI splits the video on 3 lanes
<sb0> unlike SDI
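The lane arithmetic, using standard published figures as a sanity check: 720p60 runs at a 74.25 MHz pixel clock, and each of HDMI's three data lanes carries one 8b/10b-encoded color channel, while HD-SDI serializes the whole signal on a single link.

```python
pixel_clock_hz = 74.25e6        # 720p60 pixel clock (CEA-861 timing)
tmds_bits_per_pixel = 10        # 8b/10b TMDS encoding, per lane
per_lane_bps = pixel_clock_hz * tmds_bits_per_pixel  # per HDMI data lane
hd_sdi_bps = 1.485e9            # SMPTE 292M: one serial link for everything
```

So each HDMI lane runs at 742.5 Mbps — well under the single 1.485 Gbps SDI link, which is why the serdes requirements differ so much.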
<Jespers> oooh
<Jespers> didn’t know that
<sb0> yeah, they are very different
<sb0> Mixxeo doesn't support SDI at all, btw
<sb0> and we don't have SDI cores either
<Jespers> but once in the framebuffer, it’s just pixel data anyways? "all" you need is to get it in there and out again?
<sb0> yes
<Jespers> SDI cores can be bought?
<Jespers>  well
<Jespers> don’t let me hold you any longer
<sb0> I'd rather recommend you have it developed for MiSoC...
<sb0> but HDMI - for which we have input and output cores - will give you a good taste already :)
<Jespers> excuse my lack of knowledge
<Jespers> MiSoC being a collection of cores?
<Jespers> or something like that?
<sb0> yes
<sb0> it's cores + integration system + basic support software
<Jespers> cool
<Jespers> you have developed?
<sb0> yes
<sb0> it's BSD-licensed, too
<Jespers> what do you do for a living?
<Jespers> this?
<sb0> yes
<Jespers> I wish you the best of luck!
<Jespers> Don’t let me hold you any longer. This is most likely not gonna happen :) Thanks for your time anyway.
<sb0> if you're serious about this project
<sb0> I'd recommend you get a Mixxeo and write that mapper core
<Jespers> I was really serious half a year ago.
<sb0> since the rest is already working on that board
* mumptai never understood the LGPL for IP-cores trend anyway ...
<sb0> maybe http://enjoy-digital.fr/ can write that mapper core for you, btw
<sb0> as well as SDI cores perhaps, if you decide to continue that way
<Jespers> Thanks mate
<Jespers> But repositioning is just the first step. The effect wouldn’t work unless you did some alpha overlay of some kind. We’re getting pretty close to giving the GPU the win, if only the latency could be acceptable on that side. Maybe it is. I don’t know (as you already understand)
<sb0> what do you mean by alpha overlay?
<Jespers> another layer of video
<sb0> so 3 inputs?
<Jespers> i’m afraid so
<Jespers> well, 4 :(
<Jespers> or
<Jespers> no, at least 3
<sb0> you'd use the 4th as alpha channel?
<sb0> you can use UV+alpha in one input
<Jespers> yes
<sb0> either way it merely increases the memory bandwidth...
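The per-channel cost of that overlay step is the classic "over" blend — a one-line integer sketch for 8-bit channels, illustrative only:

```python
def blend_over(overlay, base, alpha):
    # 8-bit 'over' compositing for one color channel; alpha in [0, 255],
    # with 255 meaning the overlay fully covers the base layer.
    return (overlay * alpha + base * (255 - alpha)) // 255
```

One multiply-accumulate per channel per pixel is trivial for the FPGA fabric; as noted above, the real cost of the extra input is the added memory bandwidth, not the blend arithmetic.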
<Jespers> I need to speak to someone on the GPU side of things first. This feels like a wrong solution to something that’s not even a problem worth solving, like your initial response said
<Jespers> But darn, i just want to have that capability! To freely make comps and recall them with live video sources!
<Jespers> In a simple product inspired by the liberating trends of brands like Blackmagic Design and such
<sb0> well, if you want to do it to have very low latency, get a taste of hardware design, and make something that is 'real' and simple to use, then go for it
<sb0> but the market might be tough.
<Jespers> :)
<Jespers> During a workshop, I needed to present the idea. I actually had a company make a simple video to sell the idea. I got it the same day my group concluded with failure of the project, and the first and only time it was shown was when declaring defeat. Since then I still haven’t understood why the initial idea isn’t worth looking more into. Just haven’t found the right guys to talk to yet. I found Mixxeo during a terrible drunken party at my
<Jespers> office the first week of January. Everyone had a great time, I ended up at my desk surfing randomly and found Mixxeo which kinda proved the technology side of things. What a hangover the day after! Anyways, here’s that silly video: https://www.dropbox.com/s/o7w528uojvqsdur/MultiBox.mov
<Jespers> Thanks again. I’ll hang around to follow your projects!
<sb0> hey, nice video! :-)
<sb0> you're much better at this stuff than I am
bvernoux has quit [Ping timeout: 240 seconds]
<Jespers> better at buying expensive videos, you say? :P
<sb0> ah, you had it made?
<Jespers> yeah.
<Jespers> it was really funny to pay the bill after killing the project the first time :)
<Jespers> Where are you located?
<sb0> well, the FPGA way will work, but it'll need significant effort
<sb0> Berlin for now, Hong Kong after July
<Jespers> Initially from Sweden?
<sb0> no, I'm French... but I did live in Sweden 2008-2009
<Jespers> ah, makes sense. Your name and everything
<Jespers> I’m from Norway. But I’m kinda involved in the Swedish project CasparCG, from the Swedish national broadcaster SVT. Well, I haven’t contributed code, but I’m actively using it and help out where I can in the community around it.
<Jespers> your master thesis describes texture mapping
<Jespers> based on vertices
<Jespers> I "just" need to have each pixel assigned a dynamic value sampled from a reference signal ;)
<sb0> yeah, it's simpler than the TMU from my thesis I think - but the problem is the memory bandwidth required for high resolutions
<Jespers> but the vertices, being read from RAM, could have been received on some bus from a host computer in realtime? less data than sending a full reference signal, I guess? more efficient?
<sb0> that would save a bit of memory bandwidth...
<Jespers> I really have no idea what I’m talking about. I just think of a system with a "dumb" box being super good at one operation, transforming a frame buffer in accordance with some instructions, and those instructions being sent from a smart system with the best UI ever. And the simple philosophy of "every image/video frame consists of pixels. if you have control over each pixel, you control everything". SDI, HDMI, VGA, DVI. once in the
<Jespers> framebuffer, they’re all alike. and still, every vision mixer and scaler on the market offers the same dull transformation capabilities. something is wrong when a $20,000 mixer doesn’t support Z-axis rotation, and when a $25,000 scaler highlights "strobing" as a main feature (when did you last see a powerpoint on a large screen being strobed for a "nice" effect? :P) Video textures should be painted onto a canvas in a free form. Described in a
<Jespers> scene, stored and recalled on demand. For immersive comps and effects. There, time to find the bed :) Thanks again Sébastien. Good night.
mumptai has quit [Quit: Leaving]
bentley` has joined #m-labs
<sb0> Jespers, they're not alike, even in the framebuffer :-) e.g. SDI may use YUV instead of RGB, and SDI and HDMI may have interlacing, which complicates the mapping.
<sb0> and of course, developing each and every cable-to-framebuffer interface is a pain
<sb0> especially as you'll need fancy FPGA features like SERDES, PLL, transceivers, etc. and the design tools tend to have more bugs than usual (and the usual is already high) on those special features
<sb0> better support a single standard
<sb0> and support it well
<sb0> at least to begin with...
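The YUV point in concrete terms: before pixels from an SDI-style source and an RGB source can be mixed, one of them needs a colorspace conversion per pixel. A minimal sketch using full-range BT.601 coefficients (HD-SDI actually carries limited-range BT.709 — same structure, different constants):

```python
def ycbcr_to_rgb(y, cb, cr):
    # Full-range BT.601 YCbCr -> RGB, 8-bit channels, clamped to [0, 255].
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda c: max(0, min(255, round(c)))
    return clamp(r), clamp(g), clamp(b)
```

In an FPGA this becomes fixed-point multiply-adds in the pixel pipeline — cheap, but one more per-standard block that has to exist before "it's just pixel data in the framebuffer" holds.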
Jespers_ has joined #m-labs
Jespers_ has quit [Quit: Colloquy for iPhone - http://colloquy.mobi]
aeris has quit [Ping timeout: 240 seconds]
Jespers_ has joined #m-labs
Jespers_ has quit [Client Quit]
sb0 has quit [Quit: Leaving]
aeris has joined #m-labs