I have a problem with handling multiple video windows in DirectShow.
I have an app in C# that captures the video stream of my webcam and sends it through an Infinite Pin Tee filter to two separate renderers. When I run the app, each of the renderers gets its own VideoWindow. Now, when I set the VideoWindow style, I can see changes on only one of them (it looks like it is the second renderer's window); the second window is not affected at all. What should I do to access both of them? There is only one graph to build the entire thing. Is there a way to reference the VideoWindow by the renderer rather than by the graph? I was looking around the web, but there is nothing that was able to point me in a good direction.
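For illustration, here is roughly how I access it now through the graph, and the kind of per-renderer access I'm asking about (DirectShowLib types; the filter variables are placeholders from my graph-building code):

```csharp
using DirectShowLib;

// Current approach: querying IVideoWindow from the graph. This only
// ever reaches one of the two renderer windows.
var graphWindow = (IVideoWindow)filterGraph;
graphWindow.put_WindowStyle(WindowStyle.Child | WindowStyle.ClipSiblings);

// What I'm asking about: querying each renderer filter (IBaseFilter)
// for its own IVideoWindow - is something like this supported?
var window1 = renderer1 as IVideoWindow;
var window2 = renderer2 as IVideoWindow;
if (window1 != null)
    window1.put_WindowStyle(WindowStyle.Child | WindowStyle.ClipSiblings);
if (window2 != null)
    window2.put_WindowStyle(WindowStyle.Child | WindowStyle.ClipSiblings);
```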
I'm working on a project where the client gave me the task of designing the UI and embedding videos. According to the client, the backend code of the application is complete, but it is really poorly written, with zero optimization.
The application is working perfectly otherwise: when it starts, the main form loads the axWindowsMediaPlayer form along with itself, and axWindowsMediaPlayer loads the videos from resources. The issue is that at the beginning of each video the media player blinks; if the playlist has 3 videos, it blinks 3 times. If I run the axWindowsMediaPlayer form separately, it doesn't blink at all.
I've no idea what to do here.
The standard Windows Media Player control is way too dated; it's basically good for nothing at this point. I'd just open the NuGet package list and look for any video player control. In case there is none, you can always add a Chromium-based WebView element and play your video in there.
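For the Chromium route, a sketch of what that could look like with the WebView2 WinForms control from the Microsoft.Web.WebView2 NuGet package (the URL is a placeholder):

```csharp
using System.Windows.Forms;
using Microsoft.Web.WebView2.WinForms;

public class VideoForm : Form
{
    private readonly WebView2 webView = new WebView2 { Dock = DockStyle.Fill };

    public VideoForm()
    {
        Controls.Add(webView);
        Load += async (s, e) =>
        {
            await webView.EnsureCoreWebView2Async(null); // initialize the browser core
            webView.CoreWebView2.Navigate("https://example.com/player.html");
        };
    }
}
```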
I'm building a C# application where 2 or more cameras are connected to a processing module that has one or more outputs. I need to be able to connect "monitor" windows to preview each camera and the processed output that can be hidden or shown independently, with additional processing filters added to the stream while the video program is running.
Conceptually, I'm trying to build something that looks like this:
[Diagram of the single graph: two cameras into the processing module, feeding the preview and output renderers] (source: fkeinternet.net)
(Using the Video Mixing Filter from the Video Processing Project, I can actually build the above graph and have it run with the three video renderers displaying their respective video streams - in ActiveMovie windows, not C# form windows. Building a graph is not exactly the problem; building the complete application is.)
Building on example code from the DirectShow .NET project, combined with code generated by GraphEditPlus, I can build a basic application with the video stream from a single camera displayed in a C# form window. I'm in the process of debugging multiple preview windows operating simultaneously, but I've realized there are other issues:
One of the problems with the graph illustrated above is that if any of the output windows are closed, the whole graph stops. Another is that it doesn't allow adding filters in the processing stage without stopping the whole graph and rebuilding it.
My idea is to break the monolithic graph into separate source, processing and display graphs so that each piece can be started or stopped as needed, something like this:
[Diagram of the split design: separate source, processing, and display graphs] (source: fkeinternet.net)
I'm assuming I'd have to keep one graph running all the time to provide a "master clock" source for everything else (probably the "Processing Graph" component), but I'm not quite sure how to do that.
Is there a "standard" way for connecting multiple graphs together? For that matter, is it even possible? I've done a number of searches along the lines of "c# directshow connect two graphs" but all of the links returned are related to connecting filters together, not graphs. Am I asking the wrong question?
I stumbled across this post, which pointed me to the answer I was looking for: GMFBridge.
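For anyone who lands here later, the shape of it is roughly this. A sketch assuming an interop assembly generated from the GMFBridge COM component; the member names follow its IGMFBridgeController interface, so verify them against your generated wrapper:

```csharp
// Sketch only: assumes a GMFBridge interop assembly; verify member
// names and signatures against the wrapper you generate.
var controller = (IGMFBridgeController)new GMFBridgeController();

// One uncompressed video stream; don't discard samples while unbridged.
controller.AddStream(true, eFormatType.eUncompressed, false);

// The sink end lives in the always-running source/processing graph...
object sinkFilter = controller.InsertSinkFilter(sourceGraph);

// ...and the source end lives in each display graph, which can then be
// started and stopped independently.
object sourceFilter = controller.InsertSourceFilter(sinkFilter, renderGraph);

// Connect camera -> processing -> sinkFilter in the source graph and
// sourceFilter -> renderer in the display graph, run both graphs, then:
controller.BridgeGraphs(sinkFilter, sourceFilter);
```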
I am creating an app with multiple functionalities, one of which needs access to the front-facing camera. I do not mark it as required in the manifest, because I want the user to have access to the other functionalities even if a front-facing camera is not present. I do, however, need to notify the user, whenever he starts that functionality, that a front-facing camera is needed and it cannot run. How can that be done programmatically?
I have searched around the web and only found ways to exclude devices that do not have a front-facing camera. That is not what I need, and I am wondering if it is even possible.
The Microsoft.Devices.Camera class offers information like
Camera.IsCameraTypeSupported(CameraType.FrontFacing)
I've found more info about creating and manipulating cameras here on MSDN.
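So a minimal check at the start of the camera-dependent feature could look something like this (Windows Phone 7.1; the surrounding method is a hypothetical entry point):

```csharp
using System.Windows;
using Microsoft.Devices;

// Sketch: warn the user at feature start-up if no front-facing camera
// is present, instead of excluding such devices in the manifest.
private void StartFrontCameraFeature() // hypothetical entry point
{
    if (!Camera.IsCameraTypeSupported(CameraType.FrontFacing))
    {
        MessageBox.Show(
            "This feature requires a front-facing camera, which this device does not have.");
        return;
    }
    // ...launch the camera-dependent functionality here...
}
```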
I have been fooling around with screen capture for a while now, and I have managed to capture the entire screen, certain areas of the screen, etc.
But when I go into a game and try to capture the screen, it completely ignores the game and instead, captures the desktop (or whatever is behind the game window).
Another interesting fact is that the same thing happens with the PrtScn button.
Any ideas on how to capture a game's screen?
The screen capturing technique you are using works well for capturing things that aren't hardware accelerated. I suspect you'd have the same problem trying to capture a movie frame in Windows Media Player.
The solution is to do a screen capture from the hardware itself using DirectX. This article explains how to do that, with some code and a managed wrapper around DirectX called SlimDX.
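The core of that approach looks roughly like this; a sketch using SlimDX's Direct3D9 wrapper, so verify the exact signatures against your SlimDX version:

```csharp
using System.Windows.Forms;
using SlimDX.Direct3D9;

// Sketch: grab the front buffer (the composed screen, including
// hardware-accelerated content) and save it to a file.
using (var direct3D = new Direct3D())
using (var form = new Form()) // dummy focus window for device creation
{
    DisplayMode mode = direct3D.GetAdapterDisplayMode(0);
    var pp = new PresentParameters
    {
        Windowed = true,
        SwapEffect = SwapEffect.Discard,
        BackBufferWidth = mode.Width,
        BackBufferHeight = mode.Height,
    };

    using (var device = new Device(direct3D, 0, DeviceType.Hardware,
        form.Handle, CreateFlags.SoftwareVertexProcessing, pp))
    using (var surface = Surface.CreateOffscreenPlain(device,
        mode.Width, mode.Height, Format.A8R8G8B8, Pool.SystemMemory))
    {
        device.GetFrontBufferData(0, surface);
        Surface.ToFile(surface, "capture.png", ImageFileFormat.Png);
    }
}
```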
EDIT
If SlimDX doesn't work for you, then you'd just have to find another managed wrapper around DirectX. I don't think you are going to be able to do the screen capture without working at the hardware level, and DirectX is the means of doing that on the Windows platform.
As you might surmise from the question title, we are required to decode and display multiple (e.g., eight) H.264-encoded videos at the same time (and keep them all time-synchronized, but that's another question for another time). The videos are usually at 25 FPS with a resolution of 640x480.
I'm going to provide a bit of background before I get to the crux of the problem.
The feature needs to be baked into a fairly large C# 3.5 (WinForms) application. The videos will occupy rectangles in the application - the managed code needs to be able to specify where each video is drawn as well as its size.
We get the H264 packets in C# and fire them into a native H264 decoder to get YUV12 image data.
An early attempt consisted of converting the YUV12 images to RGB24 and BitBlt'ing them to an HWND passed into the native code from C#. While functional, all the BitBlt'ing had to happen on the UI thread, which caused it to bog down when displaying more than a couple of videos (on a 2.6 GHz Core 2 Duo).
The current attempt spins up one-thread-per-cpu-core on startup and load balances the decoding/displaying of videos across these threads. The performance of this is mind-blasting (I find watching task manager much more interesting than the videos being displayed). UI-wise, it leaves a lot to be desired.
The millisecond we started drawing to an HWND created on the UI thread (e.g., a panel docked in a WinForms control) from a non-UI thread, we started getting all sorts of funky behavior due to the un-thread-safeness of WinForms. This led us to create the HWNDs in native code and draw to those, with C# providing the rectangles they should be drawn to in screen coordinates.
Gah! CanOfWorms.Open().
Problem: When the C# application receives focus, it jumps to the front of the Z-Order and hides the video windows.
Solution: Place the video windows Always On Top.
Problem: When the user switches to another application, the video windows are still on top.
Solution: Detect activation and deactivation of the C# application and show/hide the video windows accordingly.
Problem: User says, "I want my videos playing on one monitor while I edit a Word document in the other!"
Solution: Tell user to shut up and that Word sucks anyways.
Problem: I get fired.
etc. etc.
I guess the crux of the problem is that we have HWNDs created on a non-UI thread and we want to 'simulate' those being embedded in the C# application.
Any thoughts/suggestions? Am I completely out to lunch here?
I'm more than open to taking a completely different approach if one exists (This project required a lot of learning - winning the lottery would have a greater likelihood than me having picked the best approach at every step along the road).
Forget about BitBlt-ing and do this:
For each window in which you want your video to be played, create one DirectShow graph and attach that graph's renderer to the window.
Before the renderer in the graph, put a SampleGrabber filter. It gives you a callback in which you'll be able to fill the buffer (see the sketch below).
Instead of blitting, decode into the buffer provided by the SampleGrabber.
In addition, I'd guess that you'll be able to put raw YUV12 into the buffer, as the VMR (Video Mixing Renderer) is able to display it directly.
Use the DirectShowNet library.
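A sketch of that wiring with DirectShowLib; FrameHandler is a hypothetical class of yours implementing ISampleGrabberCB:

```csharp
using DirectShowLib;

// Build one graph per window: source -> SampleGrabber -> renderer.
var graph = (IFilterGraph2)new FilterGraph();
var grabber = (ISampleGrabber)new SampleGrabber();

// Ask the grabber for a fixed format so the callback buffer is predictable.
var mt = new AMMediaType
{
    majorType = MediaType.Video,
    subType = MediaSubType.RGB24, // or a YUV subtype the renderer accepts
    formatType = FormatType.VideoInfo
};
int hr = grabber.SetMediaType(mt);
DsError.ThrowExceptionForHR(hr);
DsUtils.FreeAMMediaType(mt);

grabber.SetBufferSamples(false);
grabber.SetOneShot(false);
grabber.SetCallback(new FrameHandler(), 1); // 1 = BufferCB, 0 = SampleCB

hr = graph.AddFilter((IBaseFilter)grabber, "Sample Grabber");
DsError.ThrowExceptionForHR(hr);
// Then connect the filters and attach the renderer to the target window.
```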
EDIT:
And yes, BTW, if the videos are on the same 'canvas', you can use the same technique with the renderer and create only one large window, then shift the decoded video rectangles 'by hand' and copy them into the frame buffer.
YET ANOTHER EDIT:
BitBlts are ALWAYS serialized, i.e. they can't run in parallel.
The millisecond we started drawing to an HWND created on the UI thread (e.g., a panel docked in a WinForms control) from a non-UI thread, we started getting all sorts of funky behavior due to the un-thread-safeness of WinForms. This led us to create the HWNDs in native code and draw to those, with C# providing the rectangles they should be drawn to in screen coordinates.
What kind of funky behavior?
If you mean flickering or drawing delay, have you tried to lock() on the panel or some other object for thread/drawing synchronisation?
Again: what exactly is the problem when you send the data to the decoder, receive an image, convert it, and then draw it with an OnPaint handler? (Set up a separate thread that loops at 25 fps and calls panel1.Invalidate().)
I guess the crux of the problem is that we have HWNDs created on a non-UI thread and we want to 'simulate' those being embedded in the C# application.
Don't do that. Try to draw the received data in your C# application.
In general, I wouldn't recommend mixing native code and C#. Having the H.264 decoder in native code is the only exception here.
Use your threads to decode the video packets (as you already do), then have one thread that loops and calls Invalidate (as said above). Then have an OnPaint handler for each panel you are displaying a video in. In this handler, get the most recent video picture and draw it with e.Graphics (see the sketch below).
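A sketch of that setup; the decoder threads hand finished frames to SubmitFrame, and a separate loop (or timer) calls Invalidate at ~25 fps. Names are illustrative:

```csharp
using System.Drawing;
using System.Windows.Forms;

// One of these panels per video.
class VideoPanel : Panel
{
    private readonly object frameLock = new object();
    private Bitmap latestFrame; // most recent decoded frame

    public VideoPanel()
    {
        DoubleBuffered = true; // avoids flicker on repaint
    }

    // Called from a decoder thread whenever a new frame is ready.
    public void SubmitFrame(Bitmap frame)
    {
        lock (frameLock)
        {
            if (latestFrame != null)
                latestFrame.Dispose();
            latestFrame = frame;
        }
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        lock (frameLock)
        {
            if (latestFrame != null)
                e.Graphics.DrawImage(latestFrame, ClientRectangle);
        }
    }
}
```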
I hope this helps, but I also need more information about the problem...
I like the DirectShow answer posted earlier, but I wanted to include an additional option that might be easier for you to implement, based on this excerpt from your question:
While functional, all BitBlt'ing had to happen on the UI thread which caused it to bog down when displaying more than a couple videos
My idea is to start from that code and use the Async CTP for Visual Studio 2010, which is currently available and includes a go-live license. From there it should be relatively simple to modify the existing code to be more responsive: just add await and async keywords in a few places, and the rest of the code should be largely unchanged.
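Roughly, a sketch of the change; DecodeAndConvert and videoPanel stand in for your existing code, and the CTP ships Task.Run under the name TaskEx.Run:

```csharp
// Inside the form: per received packet, decode off the UI thread,
// then draw back on it.
private async void OnPacketReceived(byte[] h264Packet)
{
    // TaskEx.Run comes from the Async CTP library (Task.Run in .NET 4.5+).
    Bitmap frame = await TaskEx.Run(() => DecodeAndConvert(h264Packet));

    // Execution resumes on the UI thread, so touching the panel is safe.
    videoPanel.SubmitFrame(frame);
    videoPanel.Invalidate();
}
```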