Fastest way to Decode and Display many H264 Videos Simultaneously in C#

As you might surmise from the question title, we are required to decode and display multiple (e.g., eight) H.264 encoded videos at the same time (and keep them all time synchronized, but that's another question for another time). The videos are usually at 25 FPS with a resolution of 640x480.
I'm going to provide a bit of background before I get to the crux of the problem.
The feature needs to be baked into a fairly large C# 3.5 (WinForms) application. The videos will occupy rectangles in the application - the managed code needs to be able to specify where each video is drawn as well as its size.
We get the H264 packets in C# and fire them into a native H264 decoder to get YUV12 image data.
An early attempt consisted of converting the YUV12 images to RGB24 and BitBlt'ing them to an HWND passed into the native code from C#. While functional, all BitBlt'ing had to happen on the UI thread, which caused it to bog down when displaying more than a couple of videos (on a 2.6 GHz Core 2 Duo).
The current attempt spins up one-thread-per-cpu-core on startup and load balances the decoding/displaying of videos across these threads. The performance of this is mind-blasting (I find watching task manager much more interesting than the videos being displayed). UI-wise, it leaves a lot to be desired.
The millisecond we started drawing to an HWND created on the UI thread (e.g., a panel docked in a WinForms control) from a non-UI thread, we started getting all sorts of funky behavior due to WinForms not being thread-safe. This led us to create the HWNDs in native code and draw to those, with C# providing the rectangles they should be drawn to in screen coordinates.
Gah! CanOfWorms.Open().
Problem: When the C# application receives focus, it jumps to the front of the Z-Order and hides the video windows.
Solution: Place the video windows Always On Top.
Problem: When the user switches to another application, the video windows are still on top.
Solution: Detect activation and deactivation of the C# application and show/hide the video windows accordingly.
Problem: User says, "I want my videos playing on one monitor while I edit a Word document in the other!"
Solution: Tell user to shut up and that Word sucks anyways.
Problem: I get fired.
etc. etc.
I guess the crux of the problem is that we have HWNDs created on a non-UI thread and we want to 'simulate' those being embedded in the C# application.
Any thoughts/suggestions? Am I completely out to lunch here?
I'm more than open to taking a completely different approach if one exists (this project required a lot of learning - winning the lottery would be more likely than me having picked the best approach at every step along the road).

Forget about BitBlt-ing and do this:
for each window in which you want a video to be played, create one DirectShow graph and attach the graph's renderer to that window
before the renderer in the graph, put a SampleGrabber filter. It gives you a callback in which you'll be able to fill the buffer
instead of blitting, decode into the buffer provided by the SampleGrabber.
In addition, I guess that you'll be able to put raw YUV12 into the buffer, as the VMR (Video Mixing Renderer) is able to display it directly.
Use the DirectShow.NET library. A rough sketch of the renderer/window part follows.
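A minimal sketch of the per-panel graph setup, assuming the DirectShow.NET (DirectShowLib) bindings and a VMR9 renderer; the source/decoder/SampleGrabber wiring is left out:

    using System;
    using System.Windows.Forms;
    using DirectShowLib;   // DirectShow.NET

    static class VideoHost
    {
        // Builds a graph whose video renderer is parented inside a WinForms panel,
        // so the video behaves like an embedded child window of the application.
        public static IGraphBuilder AttachRendererToPanel(Panel panel)
        {
            IGraphBuilder graph = (IGraphBuilder)new FilterGraph();

            // Video Mixing Renderer 9; the graph's IVideoWindow delegates to it.
            IBaseFilter vmr = (IBaseFilter)new VideoMixingRenderer9();
            graph.AddFilter(vmr, "VMR9");

            // ... add your source and SampleGrabber filters and connect them to the VMR here ...

            IVideoWindow vw = (IVideoWindow)graph;
            vw.put_Owner(panel.Handle);
            vw.put_WindowStyle(WindowStyle.Child | WindowStyle.ClipSiblings | WindowStyle.ClipChildren);
            vw.SetWindowPosition(0, 0, panel.ClientSize.Width, panel.ClientSize.Height);

            return graph;
        }
    }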
EDIT:
And yes, BTW, if the videos are on the same 'canvas', you can use the same technique with the renderer and create only one large window, then shift the decoded video rectangles 'by hand' and put them into the frame buffer.
YET ANOTHER EDIT:
BitBlts are ALWAYS serialized, i.e. they can't run in parallel.

The millisecond we started drawing to an HWND created on the UI thread (e.g., a panel docked in a WinForms control) from a non-UI thread, we started getting all sorts of funky behavior due to WinForms not being thread-safe. This led us to create the HWNDs in native code and draw to those, with C# providing the rectangles they should be drawn to in screen coordinates.
What kind of funky behavior?
If you mean flickering or drawing delay, have you tried using lock() on the panel or another object for thread/drawing synchronisation?
Again: what's the exact problem when you send the data to the decoder, receive an image, convert it and then draw it with an OnPaint handler? (Set up a different thread that loops at 25 fps and calls panel1.Invalidate().)
I guess the crux of the problem is that we have HWND's created on a non-UI thread and we want to 'simulate' those being embedded in the C# application.
Don't do that. Try to draw the received data in your C# application.
In general, I wouldn't recommend mixing native code and C#. Having the H.264 decoder in native code is the only exception here.
Use your threads to decode the video packets (as you already do), then have one thread that loops and calls Invalidate (as said above). Then have an OnPaint handler for each panel you are displaying a video in. In this handler, get the most recent video picture and draw it via e.Graphics.
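A minimal sketch of that pattern (hypothetical VideoPanel class; the decode threads push frames in, the UI thread only paints the most recent one):

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    class VideoPanel : Panel
    {
        private Bitmap latestFrame;                      // written by a decode thread
        private readonly object frameLock = new object();

        public VideoPanel()
        {
            DoubleBuffered = true;                       // avoid flicker
        }

        // Called from a decode thread whenever a new frame is ready.
        public void SubmitFrame(Bitmap frame)
        {
            lock (frameLock)
            {
                if (latestFrame != null) latestFrame.Dispose();
                latestFrame = frame;
            }
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            lock (frameLock)
            {
                if (latestFrame != null)
                    e.Graphics.DrawImage(latestFrame, ClientRectangle);
            }
        }
    }

    // Elsewhere, a timer ticking at ~25 fps per panel:
    // var timer = new Timer { Interval = 40 };
    // timer.Tick += delegate { videoPanel.Invalidate(); };
    // timer.Start();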
I hope this helps, but I also need more information about the problem...

I like the DirectShow answer posted earlier, but I wanted to include an additional option that might be easier for you to implement, based on this excerpt from your question:
While functional, all BitBlt'ing had to happen on the UI thread, which caused it to bog down when displaying more than a couple of videos
My idea is to start from that code and use the Async CTP for Visual Studio 2010, which is currently available and includes a go-live license. From there it should be relatively simple to modify the existing code to be more responsive: just add await and async keywords in a few places and the rest of the code should be largely unchanged.
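A rough illustration of the shape this takes (the YUV-to-RGB conversion below is a placeholder for your existing native-backed code; note that the Async CTP exposed TaskEx.Run rather than the later Task.Run):

    using System;
    using System.Drawing;
    using System.Threading.Tasks;
    using System.Windows.Forms;

    class FramePump
    {
        // Called on the UI thread for each incoming frame; the expensive conversion
        // is awaited on a worker thread so the UI stays responsive.
        public async void OnFrameReceived(Panel panel, byte[] yuvFrame)
        {
            Bitmap rgb = await Task.Run(() => ConvertYuvToRgb(yuvFrame));

            // Back on the UI thread here: safe to touch WinForms controls.
            panel.BackgroundImage = rgb;
            panel.Invalidate();
        }

        // Placeholder for the existing YUV12 -> RGB24 conversion.
        private Bitmap ConvertYuvToRgb(byte[] yuv)
        {
            throw new NotImplementedException();
        }
    }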

Related

Graph restarts after monitor change

I have a DirectShow graph which records and displays a video source. When I move the Video Renderer window to another monitor, what I recorded gets deleted and recording starts again. I searched and found this link which says changing monitors stops and starts the graph. How can I stop the graph from being restarted? I don't want to lose my recording while switching between monitors.
Thanks
The behavior you are describing is basically behavior by design (even though the side effect is quite annoying and confusing). Moving a video renderer between monitors makes it re-allocate the hardware resources used to present video, and this in turn needs a state transition. For recording, a state transition means opening and closing the file.
Your solution is to either split into separate presentation and recording graphs, or to use a custom allocator/presenter to take care of presentation yourself the way you want. Graph splitting (what Wimmel suggests in another answer) is presumably the preferable way, as it also adds other degrees of freedom.
There is probably a good reason that the EC_DISPLAY_CHANGED message behaves that way, so I don't know what the disadvantages are when you handle this message yourself and don't restart the graph.
Instead you could separate the rendering graph from the recording using GMFBridge. Use one graph to capture and record. Use the second graph only for rendering, so restarting that graph would not stop the recording.
Edit: Possibly you need to disconnect before the second graph is restarted. That will mean you do need to process the EC_DISPLAY_CHANGED message, even if you use GMFBridge.
// Disconnect the bridged graphs (pass NULL, NULL) before the render graph restarts.
m_pController->BridgeGraphs(NULL, NULL);

WPF realtime chart application architecture

I have the following scenario in mind:
I want to send (via serial port) some commands to a device. This device sends me back a continuous stream of data (max. 12,000 values per second).
To control some settings I need some buttons to send commands to the device to start/stop/change settings before and during the data stream. I also want a real-time plot of this data, which I will filter of course. Also, at certain timestamps there will be a signal which indicates that I want to cut out a certain window of the received data.
This means I will have two charts. I have already made some progress using WPF, but now when I interact (zoom/pan) with the lower chart, the upper one freezes noticeably. This is because both have to be refreshed very often!
Work (data receiving/filtering) is done using threads, but the update of the plot has to be done within the UI thread.
Any ideas how to solve this issue? Maybe using multiple processes?
You should use Reactive Extensions. It was built for this kind of thing.
http://msdn.microsoft.com/en-us/data/gg577609.aspx
Requesting a clear, picturesque explanation of Reactive Extensions (RX)?
On this second link, although the topic is JavaScript, much of what it says is about Reactive Extensions and cross-applies to Rx in C#.
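A minimal sketch of the idea, assuming Rx with the WPF dispatcher bindings: batch the 12,000 values/second stream off the UI thread and hand the chart only a few updates per second:

    using System;
    using System.Collections.Generic;
    using System.Reactive.Linq;

    static class PlotFeed
    {
        // deviceValues is fed from the serial-port reader; appendToChart touches the chart.
        public static IDisposable Start(IObservable<double> deviceValues,
                                        Action<IList<double>> appendToChart)
        {
            return deviceValues
                .Buffer(TimeSpan.FromMilliseconds(50))   // ~20 chart updates per second
                .Where(batch => batch.Count > 0)
                .ObserveOnDispatcher()                   // marshal the batches to the WPF UI thread
                .Subscribe(appendToChart);
        }
    }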
I'm making a similar WPF application with real-time waveforms (about 500 Hz). I have a background thread that receives real-time data and separate threads that process it and prepare the data for drawing (I have a buffer the "size" of the screen where I put the prepared values). In the UI thread I draw the waveforms to a RenderTargetBitmap, which in the end is rendered to a Canvas. This technique allows me to have a lot of real-time waveforms on the screen and have zoom and pan working without any problems (about 40-50 fps).
Please let me know if you need some technical details; I can share them with you later.
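A minimal sketch of that drawing step, assuming the background threads have already downsampled/scaled the waveform to pixel coordinates (in a real application you would reuse the bitmap rather than allocate it per frame):

    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;

    static class WaveformDrawing
    {
        // Renders one prepared waveform into a bitmap and shows it in an Image on the Canvas.
        public static void Draw(Image target, Point[] samples, int width, int height)
        {
            var visual = new DrawingVisual();
            using (DrawingContext dc = visual.RenderOpen())
            {
                var pen = new Pen(Brushes.Lime, 1.0);
                for (int i = 1; i < samples.Length; i++)
                    dc.DrawLine(pen, samples[i - 1], samples[i]);
            }

            var bitmap = new RenderTargetBitmap(width, height, 96, 96, PixelFormats.Pbgra32);
            bitmap.Render(visual);
            target.Source = bitmap;   // Image element placed on the Canvas
        }
    }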
I think you have some code in the UI thread that is not optimized well or can be moved to the background thread.
Btw, do you use any framework for charts?
Edit
philologon is right, you should use Rx for real-time data, it simplifies code A LOT. I also use them in my project.
It's a commercial product, but there is a real-time WPF chart which can handle this use case and then some. Please take a look at the tutorial below:
http://www.scichart.com/synchronizing-chartmodifier-mouse-events-across-charts/
There is a live Silverlight demo of this behaviour here:
Sync Multichart Mouse Silverlight Demo
And this chart should be able to handle zooming while inputting values at high speed:
Realtime Performance Demo
Disclosure: I am the owner and tech-lead of SciChart

Capture visual output of a DirectX application - even in background?

I need to capture the visual output (like a screenshot) of a DirectX window.
Currently, I use this approach.
But, when the window is in background, it captures whatever is in front of it.
I see that DirectX windows render even when minimized or in background, so this should be possible.
But, how? (It also needs to be fast, and it needs to work on Windows XP too, unfortunately...)
Edit: I am very busy these days... Don't worry, I'll put the bounty back if it expires.
To capture Direct3D windows that are in the background (or moved off screen), I believe you have the following options:
Inject and hook Direct3D within the target application via the link you have already posted or this more up-to-date example (EasyHook can be difficult to get set up, but it does work really well) - you can always ask for help with getting it working. I have used that technique for capturing in a number of games without issues (most recently for an ambilight-clone project). The problem with this approach is your concern about game protection causing bans; however, FRAPS also uses hooking to achieve this, so perhaps your concerns are exaggerated? I guess gamers being banned for a screenshot is an expensive way of finding out.
For windowed applications on Vista/Win 7 - you could inject and hook the DWM and make your capture requests through its shared surface. I have had this working on Vista, but have not finished getting it working on Windows 7; here is an example of it working for Windows 7: http://www.youtube.com/watch?v=G75WKeXqXkc. The main problem with this approach is the use of undocumented APIs, which could mean your application breaks without any warning upon a Windows patch release - also, you would have to redo the technique for each new major Windows flavour. This also does not address your need to capture on Windows XP.
Also within the DWM, there is a thumbnail API. This has limitations depending on what you're trying to do. There is some information on this API along with other DWM APIs here: http://blogs.msdn.com/b/greg_schechter/archive/2006/09/14/753605.aspx
There are other techniques for intercepting the Direct3D calls without using EasyHook, such as substituting the various DLLs with wrappers. You will find various other game hooking/interception techniques here: http://www.gamedeception.net/
Simply bring the Direct3D application to the foreground (which I guess is undesirable in your situation) - this wouldn't work for off-screen windows unless you also move the window.
Unfortunately the only solution for Windows XP that I can think of is intercepting the Direct3D API in some form.
Just a clarification on Direct3D rendering while minimised: during my fairly limited testing on this matter I have found this to be application-dependent; it is generally not recommended that rendering take place while the application is minimized (see also this reference), but it does continue to render while in the background.
UPDATED: provided additional link to more up-to-date injection example for point 1.
A quick Google and I found this Code Project article, which relates to Windows XP. I don't know if you can apply this knowledge to Windows Vista and 7.
http://www.codeproject.com/Articles/5051/Various-methods-for-capturing-the-screen
EDIT:
I found this article as well:
http://www.codeproject.com/Articles/20651/Capturing-Minimized-Window-A-Kid-s-Trick
This is linked from the comments on Justin's blog post here. It seems he was working on this with someone (I see that's your link above).
http://spazzarama.com/2009/02/07/screencapture-with-direct3d/
The code that you linked to (from spazzarama), which you said you were using in your project, captures the front buffer of your DirectX device. Have you tried capturing the back buffer instead? Going from the code on your linked site, you would change line 90 from
device.GetFrontBufferData(0, surface);
to
Surface backbuffer = device.GetBackBuffer(0, 0, BackBufferType.Mono);
SurfaceLoader.Save("Screenshot.bmp", ImageFileFormat.Bmp, backbuffer);
This would also involve removing lines 96-98 in your linked example. The backbuffer might be generated without the obstructing window.
EDIT
Nevermind all of that. I just realized that your linked sample code is using the window handle to define a region of the screen, and not actually doing anything with the DirectX window. Your sample code won't work around the obstruction because your region is already drawn with the other window in front of it by the time you access it.
Your best bet to salvage the application is probably to bring the DirectX window to the top of the screen before running the code to capture the image. You can use the Win32 API BringWindowToTop function to do that (http://msdn.microsoft.com/en-us/library/ms632673%28VS.85%29.aspx).
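A minimal P/Invoke sketch for that call (hWnd being the handle of the DirectX window you want brought forward):

    using System;
    using System.Runtime.InteropServices;

    static class NativeMethods
    {
        // Brings the given window to the top of the Z-order.
        [DllImport("user32.dll")]
        [return: MarshalAs(UnmanagedType.Bool)]
        public static extern bool BringWindowToTop(IntPtr hWnd);
    }

    // Usage: NativeMethods.BringWindowToTop(directXWindowHandle);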

C# drawing disappears (actually more system question)

OK, I am sure some of you already know what's happening just from my title, since I gather this is a very common question. But my question is in fact a little deeper, so please be patient with me.
All the programming I have done in past years was in assembler, mainly 8051 and AVR, as well as in C, but also for microcontrollers. I was more fascinated by HW than SW. But I am also fascinated by how an OS works, its APIs and so on. A few days ago I told my friend that creating a very simple program to plot a function graph should be very easy, if you had a math parser. He didn't believe me, so I tried to make one.
I decided to go with C#, even though I have no knowledge of OOP. But I thought that if I did everything in one button's click handler it would be like good old C.
So I got the math parser to work, and then started to draw using a Pen object. My first attempt was to draw a simple line. After reading one tutorial I managed to do so, and I created simple axes for my plot.
But then I noticed something strange: when I minimized my program, the drawing disappeared. This made me think a bit about how all this drawing is done at the system level.
I thought that the system holds an image of the active window until it's changed. So when you move your window, it just changes its position in the framebuffer. And when you minimize it, it just skips it while drawing to the framebuffer.
But I saw it's not like this. So, please, could you tell me why this is happening? I can read how to prevent it in many tutorials, but I want to know more about the why. Specifically, whether this is because of how the system API works, or because of how the C# drawing classes work.
Also, this made me wonder which functions in the C# and .NET libraries are just calls to WinAPI functions that work exactly the same way, and how many libraries and functions do something more. For example, if there were no function to draw a line in GDI, and you could only draw a dot, then C# would add a function to draw a line from those dots. I hope you understand me.
Thank you.
This is how it works in the Win32 API. When the window is minimized, the area it occupied gets "invalidated", so the windowing system knows that this area of the screen needs to be redrawn. This leads to a WM_PAINT message being sent to the windows program(s) responsible for drawing that area. You can read more about invalidating the client area (the area your program is responsible for) here.
If you're truly interested in this stuff and want a deeper understanding of how the system handles drawing (and other things, like window messages), I recommend reading more about the Win32 API, e.g. beginning with Charles Petzold's classic, Programming Windows.
Your drawing didn't disappear, it simply isn't there. Bear with me:
to draw on a window, you have to respond to the callback indicated by the WM_PAINT message. It was so in Windows 3.11 and it is so now.
drawing in a button click handler is a waste of time, because the next repaint of the form/control/window will paint the background color over it
move the same code from the button event handler to OnPaint - of course, handle the differences in semantics (see the sketch after this list)
Windows doesn't save a copy of your screen buffer - your drawing - so you have to save it somewhere or draw it on the fly
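A minimal sketch of that pattern for the function plot, assuming the parsed function is available as a Func<double, double> (names and scaling here are made up for illustration):

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    class PlotPanel : Panel
    {
        // The parsed function to plot; set this from your math parser.
        public Func<double, double> Function { get; set; }

        public PlotPanel()
        {
            DoubleBuffered = true;
            ResizeRedraw = true;          // repaint automatically on resize
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e);
            Graphics g = e.Graphics;

            // Axes through the middle of the panel.
            g.DrawLine(Pens.Black, 0, Height / 2, Width, Height / 2);
            g.DrawLine(Pens.Black, Width / 2, 0, Width / 2, Height);

            if (Function == null) return;

            // Plot y = f(x) with an arbitrary 20 pixels-per-unit scale.
            for (int px = 1; px < Width; px++)
            {
                double x0 = (px - 1 - Width / 2) / 20.0;
                double x1 = (px - Width / 2) / 20.0;
                float y0 = (float)(Height / 2 - Function(x0) * 20.0);
                float y1 = (float)(Height / 2 - Function(x1) * 20.0);
                g.DrawLine(Pens.Blue, px - 1, y0, px, y1);
            }
        }
    }

    // Usage: plotPanel.Function = x => Math.Sin(x); plotPanel.Invalidate();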
In Windows (prior to Vista / DWM and the MIL), the application is responsible for drawing its own GUI. That is, the application has to paint its own GUI when the operating system tells it to do so. Resizing or moving a form will trigger the paint event. This is how it works in User32 + GDI: the application draws its own pixels.
A WPF application, however, uses the Media Integration Layer (Vista and Windows 7), and the "milcore" is responsible for drawing the visual tree of the application. In this case the operating system is responsible for the rendering, but the application is responsible for what it wants to be rendered.
If you want to plot, use the Microsoft Chart Controls:
http://www.microsoft.com/downloads/en/details.aspx?FamilyId=130F7986-BF49-4FE5-9CA8-910AE6EA442C&displaylang=en
or ZedGraph
http://zedgraph.org/
If you want to plot yourself: the window is redrawn when you resize.
You need to redraw your plot in the paint event (or whatever it is called).
That is perfectly normal.
Also, use .NET 4.0, because otherwise you have no possibility of removing anything you drew programmatically, unless you repaint everything.

How can I edit individual pixels in a window?

I want to create a simple video renderer to play around, and do stuff like creating what would be a mobile OS just for fun. My father told me that in the very first computers, you would edit a specific memory address and the screen would update. I would like to simulate this inside a window in Windows. Is there any way I can do this with C#?
This used to be done because you could get direct access to the video buffer. This is typically not available with today's systems, as the video memory is managed by the video driver and OS. Further, there really isn't a 1:1 mapping of video memory buffer and what is displayed anymore. With so much memory available, it became possible to have multiple buffers and switch between them. The currently displayed buffer is called the "front buffer" and other, non-displayed buffers are called "back buffers" (for more, see https://en.wikipedia.org/wiki/Multiple_buffering). We typically write to back buffers and then have the video system update the front buffer for us. This provides smooth updates, as the video driver synchronizes the update with the scan rate of the monitor.
To write to back buffers using C#, my favorite technique is to use the WPF WriteableBitmap. I've also used System.Drawing.Bitmap to update the screen by writing pixels to it via LockBits.
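A minimal sketch of the WriteableBitmap approach in WPF, pushing a raw BGRA pixel buffer into an Image element:

    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;

    static class PixelSurface
    {
        // Create the bitmap once and hook it up to an Image element in your window.
        public static WriteableBitmap Create(Image target, int width, int height)
        {
            var bitmap = new WriteableBitmap(width, height, 96, 96, PixelFormats.Bgra32, null);
            target.Source = bitmap;
            return bitmap;
        }

        // Copy a full frame of BGRA pixels (4 bytes per pixel) into the bitmap.
        // Must be called on the UI thread.
        public static void Present(WriteableBitmap bitmap, byte[] bgraPixels)
        {
            int stride = bitmap.PixelWidth * 4;
            bitmap.WritePixels(
                new Int32Rect(0, 0, bitmap.PixelWidth, bitmap.PixelHeight),
                bgraPixels, stride, 0);
        }
    }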
It's a full-featured topic that's outside the scope of this answer (it won't fit, not that I wouldn't ramble about it for hours :-) ), but this should get you started with drawing in C#:
http://www.geekpedia.com/tutorial50_Drawing-with-Csharp.html
Things have come a long way from the old days of direct memory manipulation... although everything is still tied to pixels.
Edit: Oh, and if you run into flickering problems and get stuck, drop me a line and I'll send you a DoubleBuffered panel to paint with.
